2026-04-01 00:00:08.503970 | Job console starting
2026-04-01 00:00:08.519266 | Updating git repos
2026-04-01 00:00:09.122434 | Cloning repos into workspace
2026-04-01 00:00:09.581704 | Restoring repo states
2026-04-01 00:00:09.604583 | Merging changes
2026-04-01 00:00:09.604605 | Checking out repos
2026-04-01 00:00:10.006755 | Preparing playbooks
2026-04-01 00:00:11.173284 | Running Ansible setup
2026-04-01 00:00:18.830688 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-01 00:00:20.068562 |
2026-04-01 00:00:20.068694 | PLAY [Base pre]
2026-04-01 00:00:20.106543 |
2026-04-01 00:00:20.106672 | TASK [Setup log path fact]
2026-04-01 00:00:20.138421 | orchestrator | ok
2026-04-01 00:00:20.175212 |
2026-04-01 00:00:20.175353 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-01 00:00:20.214991 | orchestrator | ok
2026-04-01 00:00:20.246002 |
2026-04-01 00:00:20.246123 | TASK [emit-job-header : Print job information]
2026-04-01 00:00:20.330884 | # Job Information
2026-04-01 00:00:20.331053 | Ansible Version: 2.16.14
2026-04-01 00:00:20.331087 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-01 00:00:20.331121 | Pipeline: periodic-midnight
2026-04-01 00:00:20.331145 | Executor: 521e9411259a
2026-04-01 00:00:20.331166 | Triggered by: https://github.com/osism/testbed
2026-04-01 00:00:20.331188 | Event ID: f6683d5454e7445eb5bfe1b19b48e70b
2026-04-01 00:00:20.351465 |
2026-04-01 00:00:20.351582 | LOOP [emit-job-header : Print node information]
2026-04-01 00:00:20.602281 | orchestrator | ok:
2026-04-01 00:00:20.602426 | orchestrator | # Node Information
2026-04-01 00:00:20.602455 | orchestrator | Inventory Hostname: orchestrator
2026-04-01 00:00:20.602476 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-01 00:00:20.602494 | orchestrator | Username: zuul-testbed01
2026-04-01 00:00:20.602511 | orchestrator | Distro: Debian 12.13
2026-04-01 00:00:20.602531 | orchestrator | Provider: static-testbed
2026-04-01 00:00:20.602548 | orchestrator | Region:
2026-04-01 00:00:20.602565 | orchestrator | Label: testbed-orchestrator
2026-04-01 00:00:20.602582 | orchestrator | Product Name: OpenStack Nova
2026-04-01 00:00:20.602598 | orchestrator | Interface IP: 81.163.193.140
2026-04-01 00:00:20.614444 |
2026-04-01 00:00:20.614540 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-01 00:00:22.240025 | orchestrator -> localhost | changed
2026-04-01 00:00:22.246304 |
2026-04-01 00:00:22.246396 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-01 00:00:24.726068 | orchestrator -> localhost | changed
2026-04-01 00:00:24.755453 |
2026-04-01 00:00:24.755548 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-01 00:00:25.675366 | orchestrator -> localhost | ok
2026-04-01 00:00:25.681102 |
2026-04-01 00:00:25.681187 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-01 00:00:25.728909 | orchestrator | ok
2026-04-01 00:00:25.758641 | orchestrator | included: /var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-01 00:00:25.778687 |
2026-04-01 00:00:25.778785 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-01 00:00:28.004226 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-01 00:00:28.004386 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/c24d998d74c248cb905c5d59acbcdaec_id_rsa
2026-04-01 00:00:28.004416 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/c24d998d74c248cb905c5d59acbcdaec_id_rsa.pub
2026-04-01 00:00:28.004439 | orchestrator -> localhost | The key fingerprint is:
2026-04-01 00:00:28.004460 | orchestrator -> localhost | SHA256:9Lexm6lJfqAy2APDsFZHvP8bOIV3hFAPb7Umf6Q3d2Q zuul-build-sshkey
2026-04-01 00:00:28.004479 | orchestrator -> localhost | The key's randomart image is:
2026-04-01 00:00:28.004508 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-01 00:00:28.004526 | orchestrator -> localhost | | . ..o . |
2026-04-01 00:00:28.004544 | orchestrator -> localhost | | o . = . . |
2026-04-01 00:00:28.004561 | orchestrator -> localhost | | . .. . * o E|
2026-04-01 00:00:28.004576 | orchestrator -> localhost | | . . o. o o + = |
2026-04-01 00:00:28.004593 | orchestrator -> localhost | | = . .S + + o.=|
2026-04-01 00:00:28.004614 | orchestrator -> localhost | | o + .+.o + o+|
2026-04-01 00:00:28.004631 | orchestrator -> localhost | | . = ooo.o |
2026-04-01 00:00:28.004648 | orchestrator -> localhost | | . = .+.o.+ |
2026-04-01 00:00:28.004666 | orchestrator -> localhost | | + =++ |
2026-04-01 00:00:28.004682 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-01 00:00:28.004722 | orchestrator -> localhost | ok: Runtime: 0:00:01.293829
2026-04-01 00:00:28.013820 |
2026-04-01 00:00:28.013924 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-01 00:00:28.057358 | orchestrator | ok
2026-04-01 00:00:28.070695 | orchestrator | included: /var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-01 00:00:28.089543 |
2026-04-01 00:00:28.089635 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-01 00:00:28.147179 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:28.154014 |
2026-04-01 00:00:28.154116 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-01 00:00:28.864724 | orchestrator | changed
2026-04-01 00:00:28.873506 |
2026-04-01 00:00:28.873918 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-01 00:00:29.187443 | orchestrator | ok
2026-04-01 00:00:29.195834 |
2026-04-01 00:00:29.195943 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-01 00:00:29.761835 | orchestrator | ok
2026-04-01 00:00:29.767002 |
2026-04-01 00:00:29.767081 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-01 00:00:30.252386 | orchestrator | ok
2026-04-01 00:00:30.267070 |
2026-04-01 00:00:30.267189 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-01 00:00:30.313197 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:30.319691 |
2026-04-01 00:00:30.319785 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-01 00:00:31.717814 | orchestrator -> localhost | changed
2026-04-01 00:00:31.729371 |
2026-04-01 00:00:31.729458 | TASK [add-build-sshkey : Add back temp key]
2026-04-01 00:00:32.291093 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/c24d998d74c248cb905c5d59acbcdaec_id_rsa (zuul-build-sshkey)
2026-04-01 00:00:32.291297 | orchestrator -> localhost | ok: Runtime: 0:00:00.025297
2026-04-01 00:00:32.297158 |
2026-04-01 00:00:32.297243 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-01 00:00:32.845530 | orchestrator | ok
2026-04-01 00:00:32.850298 |
2026-04-01 00:00:32.850379 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-01 00:00:32.897137 | orchestrator | skipping: Conditional result was False
2026-04-01 00:00:33.057926 |
2026-04-01 00:00:33.058028 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-01 00:00:33.657974 | orchestrator | ok
2026-04-01 00:00:33.666792 |
2026-04-01 00:00:33.686956 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-01 00:00:33.734152 | orchestrator | ok
2026-04-01 00:00:33.745393 |
2026-04-01 00:00:33.745483 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-01 00:00:34.802096 | orchestrator -> localhost | ok
2026-04-01 00:00:34.808137 |
2026-04-01 00:00:34.808222 | TASK [validate-host : Collect information about the host]
2026-04-01 00:00:36.431195 | orchestrator | ok
2026-04-01 00:00:36.466142 |
2026-04-01 00:00:36.466261 | TASK [validate-host : Sanitize hostname]
2026-04-01 00:00:36.578728 | orchestrator | ok
2026-04-01 00:00:36.591449 |
2026-04-01 00:00:36.591565 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-01 00:00:38.464257 | orchestrator -> localhost | changed
2026-04-01 00:00:38.475710 |
2026-04-01 00:00:38.480207 | TASK [validate-host : Collect information about zuul worker]
2026-04-01 00:00:39.199673 | orchestrator | ok
2026-04-01 00:00:39.215512 |
2026-04-01 00:00:39.215618 | TASK [validate-host : Write out all zuul information for each host]
2026-04-01 00:00:41.011399 | orchestrator -> localhost | changed
2026-04-01 00:00:41.024921 |
2026-04-01 00:00:41.025023 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-01 00:00:41.315502 | orchestrator | ok
2026-04-01 00:00:41.321212 |
2026-04-01 00:00:41.321303 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-01 00:02:02.108566 | orchestrator | changed:
2026-04-01 00:02:02.110296 | orchestrator | .d..t...... src/
2026-04-01 00:02:02.110474 | orchestrator | .d..t...... src/github.com/
2026-04-01 00:02:02.110545 | orchestrator | .d..t...... src/github.com/osism/
2026-04-01 00:02:02.110600 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-01 00:02:02.110652 | orchestrator | RedHat.yml
2026-04-01 00:02:02.135687 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-01 00:02:02.135708 | orchestrator | RedHat.yml
2026-04-01 00:02:02.135771 | orchestrator | = 2.2.0"...
2026-04-01 00:02:17.783730 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-01 00:02:17.803727 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-01 00:02:18.014423 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-01 00:02:18.812623 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-01 00:02:18.887967 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-04-01 00:02:19.609884 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-01 00:02:19.686089 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-01 00:02:20.224150 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-01 00:02:20.224208 | orchestrator |
2026-04-01 00:02:20.224215 | orchestrator | Providers are signed by their developers.
2026-04-01 00:02:20.224220 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-01 00:02:20.224224 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-01 00:02:20.224231 | orchestrator |
2026-04-01 00:02:20.224236 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-01 00:02:20.224240 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-01 00:02:20.224251 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-01 00:02:20.224255 | orchestrator | you run "tofu init" in the future.
2026-04-01 00:02:20.224259 | orchestrator |
2026-04-01 00:02:20.224263 | orchestrator | OpenTofu has been successfully initialized!
2026-04-01 00:02:20.224267 | orchestrator |
2026-04-01 00:02:20.224270 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-01 00:02:20.224274 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-01 00:02:20.224278 | orchestrator | should now work.
2026-04-01 00:02:20.224283 | orchestrator |
2026-04-01 00:02:20.224286 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-01 00:02:20.224290 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-01 00:02:20.224294 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-01 00:02:20.446420 | orchestrator | Created and switched to workspace "ci"!
2026-04-01 00:02:20.446478 | orchestrator |
2026-04-01 00:02:20.446485 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-01 00:02:20.446491 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-01 00:02:20.446510 | orchestrator | for this configuration.
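For reference, a `required_providers` block consistent with the providers installed above can be sketched as follows. This is a hypothetical illustration, not the testbed's actual configuration; only the provider sources, the openstack version constraint (">= 1.53.0"), and the resolved versions are taken from the log (the constraint for hashicorp/local is truncated in the log and is therefore left unpinned here).

```hcl
# Hypothetical sketch of the provider requirements resolved by "tofu init"
# above; the real testbed configuration may differ.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # constraint shown in the log; v3.4.0 was selected
    }
    local = {
      source = "hashicorp/local" # v2.7.0 was selected
    }
    null = {
      source = "hashicorp/null" # "latest version" lookup; v3.2.4 was selected
    }
  }
}
```

On `tofu init`, OpenTofu resolves these constraints and records the selected versions in `.terraform.lock.hcl`, which is what the "created a lock file" message above refers to.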
2026-04-01 00:02:21.117269 | orchestrator | ci.auto.tfvars
2026-04-01 00:02:21.119760 | orchestrator | default_custom.tf
2026-04-01 00:02:22.332804 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-01 00:02:22.916170 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-01 00:02:24.758766 | orchestrator |
2026-04-01 00:02:24.758839 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-01 00:02:24.758849 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-01 00:02:24.758853 | orchestrator |   + create
2026-04-01 00:02:24.758867 | orchestrator |  <= read (data resources)
2026-04-01 00:02:24.758871 | orchestrator |
2026-04-01 00:02:24.758876 | orchestrator | OpenTofu will perform the following actions:
2026-04-01 00:02:24.758880 | orchestrator |
2026-04-01 00:02:24.758885 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-01 00:02:24.758889 | orchestrator |   # (config refers to values not yet known)
2026-04-01 00:02:24.758893 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-01 00:02:24.758898 | orchestrator |       + checksum = (known after apply)
2026-04-01 00:02:24.758902 | orchestrator |       + created_at = (known after apply)
2026-04-01 00:02:24.758906 | orchestrator |       + file = (known after apply)
2026-04-01 00:02:24.758910 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.758933 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.758937 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-01 00:02:24.758941 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-01 00:02:24.758945 | orchestrator |       + most_recent = true
2026-04-01 00:02:24.758949 | orchestrator |       + name = (known after apply)
2026-04-01 00:02:24.758953 | orchestrator |       + protected = (known after apply)
2026-04-01 00:02:24.758957 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.758963 | orchestrator |       + schema = (known after apply)
2026-04-01 00:02:24.758967 | orchestrator |       + size_bytes = (known after apply)
2026-04-01 00:02:24.758971 | orchestrator |       + tags = (known after apply)
2026-04-01 00:02:24.758974 | orchestrator |       + updated_at = (known after apply)
2026-04-01 00:02:24.758978 | orchestrator |     }
2026-04-01 00:02:24.758984 | orchestrator |
2026-04-01 00:02:24.758988 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-01 00:02:24.759018 | orchestrator |   # (config refers to values not yet known)
2026-04-01 00:02:24.759023 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-01 00:02:24.759027 | orchestrator |       + checksum = (known after apply)
2026-04-01 00:02:24.759031 | orchestrator |       + created_at = (known after apply)
2026-04-01 00:02:24.759035 | orchestrator |       + file = (known after apply)
2026-04-01 00:02:24.759039 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759042 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759046 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-01 00:02:24.759050 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-01 00:02:24.759054 | orchestrator |       + most_recent = true
2026-04-01 00:02:24.759058 | orchestrator |       + name = (known after apply)
2026-04-01 00:02:24.759062 | orchestrator |       + protected = (known after apply)
2026-04-01 00:02:24.759065 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759069 | orchestrator |       + schema = (known after apply)
2026-04-01 00:02:24.759073 | orchestrator |       + size_bytes = (known after apply)
2026-04-01 00:02:24.759077 | orchestrator |       + tags = (known after apply)
2026-04-01 00:02:24.759080 | orchestrator |       + updated_at = (known after apply)
2026-04-01 00:02:24.759084 | orchestrator |     }
2026-04-01 00:02:24.759088 | orchestrator |
2026-04-01 00:02:24.759092 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-01 00:02:24.759096 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-01 00:02:24.759100 | orchestrator |       + content = (known after apply)
2026-04-01 00:02:24.759104 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-01 00:02:24.759107 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-01 00:02:24.759111 | orchestrator |       + content_md5 = (known after apply)
2026-04-01 00:02:24.759115 | orchestrator |       + content_sha1 = (known after apply)
2026-04-01 00:02:24.759119 | orchestrator |       + content_sha256 = (known after apply)
2026-04-01 00:02:24.759122 | orchestrator |       + content_sha512 = (known after apply)
2026-04-01 00:02:24.759126 | orchestrator |       + directory_permission = "0777"
2026-04-01 00:02:24.759130 | orchestrator |       + file_permission = "0644"
2026-04-01 00:02:24.759134 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-01 00:02:24.759137 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759141 | orchestrator |     }
2026-04-01 00:02:24.759147 | orchestrator |
2026-04-01 00:02:24.759151 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-01 00:02:24.759155 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-01 00:02:24.759158 | orchestrator |       + content = (known after apply)
2026-04-01 00:02:24.759162 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-01 00:02:24.759166 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-01 00:02:24.759170 | orchestrator |       + content_md5 = (known after apply)
2026-04-01 00:02:24.759173 | orchestrator |       + content_sha1 = (known after apply)
2026-04-01 00:02:24.759177 | orchestrator |       + content_sha256 = (known after apply)
2026-04-01 00:02:24.759181 | orchestrator |       + content_sha512 = (known after apply)
2026-04-01 00:02:24.759185 | orchestrator |       + directory_permission = "0777"
2026-04-01 00:02:24.759188 | orchestrator |       + file_permission = "0644"
2026-04-01 00:02:24.759197 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-01 00:02:24.759201 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759205 | orchestrator |     }
2026-04-01 00:02:24.759209 | orchestrator |
2026-04-01 00:02:24.759220 | orchestrator |   # local_file.inventory will be created
2026-04-01 00:02:24.759224 | orchestrator |   + resource "local_file" "inventory" {
2026-04-01 00:02:24.759228 | orchestrator |       + content = (known after apply)
2026-04-01 00:02:24.759231 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-01 00:02:24.759235 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-01 00:02:24.759239 | orchestrator |       + content_md5 = (known after apply)
2026-04-01 00:02:24.759243 | orchestrator |       + content_sha1 = (known after apply)
2026-04-01 00:02:24.759247 | orchestrator |       + content_sha256 = (known after apply)
2026-04-01 00:02:24.759251 | orchestrator |       + content_sha512 = (known after apply)
2026-04-01 00:02:24.759255 | orchestrator |       + directory_permission = "0777"
2026-04-01 00:02:24.759258 | orchestrator |       + file_permission = "0644"
2026-04-01 00:02:24.759262 | orchestrator |       + filename = "inventory.ci"
2026-04-01 00:02:24.759266 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759269 | orchestrator |     }
2026-04-01 00:02:24.759273 | orchestrator |
2026-04-01 00:02:24.759277 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-01 00:02:24.759281 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-01 00:02:24.759285 | orchestrator |       + content = (sensitive value)
2026-04-01 00:02:24.759288 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-01 00:02:24.759292 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-01 00:02:24.759296 | orchestrator |       + content_md5 = (known after apply)
2026-04-01 00:02:24.759300 | orchestrator |       + content_sha1 = (known after apply)
2026-04-01 00:02:24.759304 | orchestrator |       + content_sha256 = (known after apply)
2026-04-01 00:02:24.759307 | orchestrator |       + content_sha512 = (known after apply)
2026-04-01 00:02:24.759311 | orchestrator |       + directory_permission = "0700"
2026-04-01 00:02:24.759315 | orchestrator |       + file_permission = "0600"
2026-04-01 00:02:24.759319 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-01 00:02:24.759323 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759326 | orchestrator |     }
2026-04-01 00:02:24.759332 | orchestrator |
2026-04-01 00:02:24.759336 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-01 00:02:24.759339 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-01 00:02:24.759343 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759347 | orchestrator |     }
2026-04-01 00:02:24.759351 | orchestrator |
2026-04-01 00:02:24.759355 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-01 00:02:24.759358 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-01 00:02:24.759362 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759366 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759370 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759373 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759377 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759381 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-01 00:02:24.759385 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759389 | orchestrator |       + size = 80
2026-04-01 00:02:24.759392 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759396 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759400 | orchestrator |     }
2026-04-01 00:02:24.759404 | orchestrator |
2026-04-01 00:02:24.759407 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-01 00:02:24.759411 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759415 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759419 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759423 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759429 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759433 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759437 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-01 00:02:24.759441 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759444 | orchestrator |       + size = 80
2026-04-01 00:02:24.759448 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759452 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759456 | orchestrator |     }
2026-04-01 00:02:24.759459 | orchestrator |
2026-04-01 00:02:24.759463 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-01 00:02:24.759467 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759471 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759475 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759478 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759482 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759486 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759490 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-01 00:02:24.759493 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759497 | orchestrator |       + size = 80
2026-04-01 00:02:24.759501 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759505 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759508 | orchestrator |     }
2026-04-01 00:02:24.759514 | orchestrator |
2026-04-01 00:02:24.759518 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-01 00:02:24.759521 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759525 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759529 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759533 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759537 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759540 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759544 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-01 00:02:24.759548 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759552 | orchestrator |       + size = 80
2026-04-01 00:02:24.759555 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759559 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759563 | orchestrator |     }
2026-04-01 00:02:24.759567 | orchestrator |
2026-04-01 00:02:24.759570 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-01 00:02:24.759574 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759578 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759582 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759586 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759589 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759593 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759599 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-01 00:02:24.759603 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759607 | orchestrator |       + size = 80
2026-04-01 00:02:24.759611 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759614 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759618 | orchestrator |     }
2026-04-01 00:02:24.759622 | orchestrator |
2026-04-01 00:02:24.759626 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-01 00:02:24.759629 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759633 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759637 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759641 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759648 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759652 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759656 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-01 00:02:24.759660 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759664 | orchestrator |       + size = 80
2026-04-01 00:02:24.759667 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759671 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759675 | orchestrator |     }
2026-04-01 00:02:24.759679 | orchestrator |
2026-04-01 00:02:24.759682 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-01 00:02:24.759686 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-01 00:02:24.759690 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759694 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759697 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759701 | orchestrator |       + image_id = (known after apply)
2026-04-01 00:02:24.759705 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759718 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-01 00:02:24.759722 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759726 | orchestrator |       + size = 80
2026-04-01 00:02:24.759730 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759733 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759737 | orchestrator |     }
2026-04-01 00:02:24.759743 | orchestrator |
2026-04-01 00:02:24.759746 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-01 00:02:24.759751 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.759754 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759758 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759762 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759766 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759769 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-01 00:02:24.759773 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759777 | orchestrator |       + size = 20
2026-04-01 00:02:24.759781 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759784 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759788 | orchestrator |     }
2026-04-01 00:02:24.759792 | orchestrator |
2026-04-01 00:02:24.759796 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-01 00:02:24.759799 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.759803 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759807 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759811 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759814 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759818 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-01 00:02:24.759822 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759826 | orchestrator |       + size = 20
2026-04-01 00:02:24.759829 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759833 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759837 | orchestrator |     }
2026-04-01 00:02:24.759841 | orchestrator |
2026-04-01 00:02:24.759844 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-01 00:02:24.759848 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.759852 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759856 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759860 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759863 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759867 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-01 00:02:24.759871 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759878 | orchestrator |       + size = 20
2026-04-01 00:02:24.759882 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759886 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759890 | orchestrator |     }
2026-04-01 00:02:24.759893 | orchestrator |
2026-04-01 00:02:24.759897 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-01 00:02:24.759901 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.759905 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759908 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759912 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759916 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759919 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-01 00:02:24.759923 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759927 | orchestrator |       + size = 20
2026-04-01 00:02:24.759930 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759934 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759938 | orchestrator |     }
2026-04-01 00:02:24.759942 | orchestrator |
2026-04-01 00:02:24.759945 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-01 00:02:24.759949 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.759953 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.759957 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.759960 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.759964 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.759968 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-01 00:02:24.759972 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.759978 | orchestrator |       + size = 20
2026-04-01 00:02:24.759982 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.759985 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.759989 | orchestrator |     }
2026-04-01 00:02:24.760005 | orchestrator |
2026-04-01 00:02:24.760009 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-01 00:02:24.760013 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.760016 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.760020 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.760024 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.760028 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.760031 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-01 00:02:24.760035 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.760039 | orchestrator |       + size = 20
2026-04-01 00:02:24.760043 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.760046 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.760050 | orchestrator |     }
2026-04-01 00:02:24.760056 | orchestrator |
2026-04-01 00:02:24.760060 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-01 00:02:24.760063 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.760067 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.760071 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.760075 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.760078 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.760082 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-01 00:02:24.760086 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.760090 | orchestrator |       + size = 20
2026-04-01 00:02:24.760093 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.760097 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.760101 | orchestrator |     }
2026-04-01 00:02:24.760105 | orchestrator |
2026-04-01 00:02:24.760108 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-01 00:02:24.760112 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-01 00:02:24.760120 | orchestrator |       + attachment = (known after apply)
2026-04-01 00:02:24.760124 | orchestrator |       + availability_zone = "nova"
2026-04-01 00:02:24.760127 | orchestrator |       + id = (known after apply)
2026-04-01 00:02:24.760131 | orchestrator |       + metadata = (known after apply)
2026-04-01 00:02:24.760135 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-01 00:02:24.760139 | orchestrator |       + region = (known after apply)
2026-04-01 00:02:24.760142 | orchestrator |       + size = 20
2026-04-01 00:02:24.760146 | orchestrator |       + volume_retype_policy = "never"
2026-04-01 00:02:24.760150 | orchestrator |       + volume_type = "ssd"
2026-04-01 00:02:24.760154 | orchestrator |     }
2026-04-01 00:02:24.760158 | orchestrator |
2026-04-01 00:02:24.760161 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-01 00:02:24.760165 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-01 00:02:24.760169 | orchestrator | + attachment = (known after apply) 2026-04-01 00:02:24.760172 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760176 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760180 | orchestrator | + metadata = (known after apply) 2026-04-01 00:02:24.760184 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-01 00:02:24.760187 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760191 | orchestrator | + size = 20 2026-04-01 00:02:24.760195 | orchestrator | + volume_retype_policy = "never" 2026-04-01 00:02:24.760199 | orchestrator | + volume_type = "ssd" 2026-04-01 00:02:24.760202 | orchestrator | } 2026-04-01 00:02:24.760206 | orchestrator | 2026-04-01 00:02:24.760210 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-01 00:02:24.760214 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-01 00:02:24.760217 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.760221 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.760225 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.760228 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.760232 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760236 | orchestrator | + config_drive = true 2026-04-01 00:02:24.760240 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.760243 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.760247 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-01 00:02:24.760251 | orchestrator | + force_delete = false 2026-04-01 00:02:24.760255 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.760258 | 
orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760262 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.760266 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.760270 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.760273 | orchestrator | + name = "testbed-manager" 2026-04-01 00:02:24.760277 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.760281 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760285 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.760288 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.760292 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.760296 | orchestrator | + user_data = (sensitive value) 2026-04-01 00:02:24.760299 | orchestrator | 2026-04-01 00:02:24.760303 | orchestrator | + block_device { 2026-04-01 00:02:24.760307 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.760311 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.760317 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.760321 | orchestrator | + multiattach = false 2026-04-01 00:02:24.760325 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.760328 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760335 | orchestrator | } 2026-04-01 00:02:24.760339 | orchestrator | 2026-04-01 00:02:24.760342 | orchestrator | + network { 2026-04-01 00:02:24.760346 | orchestrator | + access_network = false 2026-04-01 00:02:24.760350 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.760353 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.760357 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.760361 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.760365 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.760368 | orchestrator | + uuid = (known after apply) 2026-04-01 
00:02:24.760372 | orchestrator | } 2026-04-01 00:02:24.760376 | orchestrator | } 2026-04-01 00:02:24.760381 | orchestrator | 2026-04-01 00:02:24.760385 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-01 00:02:24.760389 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.760393 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.760397 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.760400 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.760404 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.760408 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760412 | orchestrator | + config_drive = true 2026-04-01 00:02:24.760415 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.760419 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.760423 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.760426 | orchestrator | + force_delete = false 2026-04-01 00:02:24.760430 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.760434 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760438 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.760441 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.760445 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.760449 | orchestrator | + name = "testbed-node-0" 2026-04-01 00:02:24.760453 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.760456 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760460 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.760464 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.760467 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.760471 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.760475 | orchestrator | 2026-04-01 00:02:24.760479 | orchestrator | + block_device { 2026-04-01 00:02:24.760483 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.760486 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.760490 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.760494 | orchestrator | + multiattach = false 2026-04-01 00:02:24.760497 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.760501 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760505 | orchestrator | } 2026-04-01 00:02:24.760509 | orchestrator | 2026-04-01 00:02:24.760512 | orchestrator | + network { 2026-04-01 00:02:24.760516 | orchestrator | + access_network = false 2026-04-01 00:02:24.760520 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.760523 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.760527 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.760531 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.760535 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.760538 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760542 | orchestrator | } 2026-04-01 00:02:24.760546 | orchestrator | } 2026-04-01 00:02:24.760550 | orchestrator | 2026-04-01 00:02:24.760553 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-01 00:02:24.760557 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.760561 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.760567 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.760571 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.760575 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.760579 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760582 
| orchestrator | + config_drive = true 2026-04-01 00:02:24.760586 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.760590 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.760594 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.760597 | orchestrator | + force_delete = false 2026-04-01 00:02:24.760601 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.760605 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760608 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.760612 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.760616 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.760620 | orchestrator | + name = "testbed-node-1" 2026-04-01 00:02:24.760623 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.760627 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760631 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.760635 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.760639 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.760642 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.760646 | orchestrator | 2026-04-01 00:02:24.760650 | orchestrator | + block_device { 2026-04-01 00:02:24.760654 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.760657 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.760661 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.760665 | orchestrator | + multiattach = false 2026-04-01 00:02:24.760669 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.760672 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760676 | orchestrator | } 2026-04-01 00:02:24.760680 | orchestrator | 2026-04-01 00:02:24.760683 | orchestrator | + network { 2026-04-01 00:02:24.760687 | orchestrator | + access_network = 
false 2026-04-01 00:02:24.760691 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.760695 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.760698 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.760702 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.760706 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.760710 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760713 | orchestrator | } 2026-04-01 00:02:24.760717 | orchestrator | } 2026-04-01 00:02:24.760723 | orchestrator | 2026-04-01 00:02:24.760727 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-01 00:02:24.760730 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.760734 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.760738 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.760742 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.760746 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.760752 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760756 | orchestrator | + config_drive = true 2026-04-01 00:02:24.760760 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.760763 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.760767 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.760771 | orchestrator | + force_delete = false 2026-04-01 00:02:24.760775 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.760778 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760782 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.760792 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.760796 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.760799 | orchestrator | + name = 
"testbed-node-2" 2026-04-01 00:02:24.760803 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.760807 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760810 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.760814 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.760818 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.760821 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.760825 | orchestrator | 2026-04-01 00:02:24.760829 | orchestrator | + block_device { 2026-04-01 00:02:24.760833 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.760836 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.760840 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.760844 | orchestrator | + multiattach = false 2026-04-01 00:02:24.760847 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.760851 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760855 | orchestrator | } 2026-04-01 00:02:24.760859 | orchestrator | 2026-04-01 00:02:24.760862 | orchestrator | + network { 2026-04-01 00:02:24.760866 | orchestrator | + access_network = false 2026-04-01 00:02:24.760870 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.760873 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.760877 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.760881 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.760884 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.760888 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.760892 | orchestrator | } 2026-04-01 00:02:24.760895 | orchestrator | } 2026-04-01 00:02:24.760899 | orchestrator | 2026-04-01 00:02:24.760903 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-01 00:02:24.760907 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.760910 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.760914 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.760918 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.760921 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.760925 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.760929 | orchestrator | + config_drive = true 2026-04-01 00:02:24.760932 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.760936 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.760940 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.760943 | orchestrator | + force_delete = false 2026-04-01 00:02:24.760947 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.760951 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.760955 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.760958 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.760962 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.760966 | orchestrator | + name = "testbed-node-3" 2026-04-01 00:02:24.760970 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.760973 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.760977 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.760981 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.760984 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.760988 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.761004 | orchestrator | 2026-04-01 00:02:24.761008 | orchestrator | + block_device { 2026-04-01 00:02:24.761014 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.761017 | orchestrator | + delete_on_termination = false 2026-04-01 
00:02:24.761021 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.761028 | orchestrator | + multiattach = false 2026-04-01 00:02:24.761032 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.761036 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761039 | orchestrator | } 2026-04-01 00:02:24.761043 | orchestrator | 2026-04-01 00:02:24.761047 | orchestrator | + network { 2026-04-01 00:02:24.761051 | orchestrator | + access_network = false 2026-04-01 00:02:24.761054 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.761058 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.761062 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.761066 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.761069 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.761073 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761077 | orchestrator | } 2026-04-01 00:02:24.761080 | orchestrator | } 2026-04-01 00:02:24.761086 | orchestrator | 2026-04-01 00:02:24.761090 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-01 00:02:24.761094 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.761098 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.761101 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.761105 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.761109 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.761113 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.761116 | orchestrator | + config_drive = true 2026-04-01 00:02:24.761120 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.761124 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.761127 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.761131 | 
orchestrator | + force_delete = false 2026-04-01 00:02:24.761135 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.761138 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.761142 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.761146 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.761150 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.761153 | orchestrator | + name = "testbed-node-4" 2026-04-01 00:02:24.761157 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.761161 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.761164 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.761168 | orchestrator | + stop_before_destroy = false 2026-04-01 00:02:24.761172 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.761175 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.761179 | orchestrator | 2026-04-01 00:02:24.761183 | orchestrator | + block_device { 2026-04-01 00:02:24.761187 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.761190 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.761194 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.761198 | orchestrator | + multiattach = false 2026-04-01 00:02:24.761201 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.761205 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761209 | orchestrator | } 2026-04-01 00:02:24.761213 | orchestrator | 2026-04-01 00:02:24.761216 | orchestrator | + network { 2026-04-01 00:02:24.761220 | orchestrator | + access_network = false 2026-04-01 00:02:24.761224 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.761227 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.761231 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.761235 | orchestrator | + name = (known 
after apply) 2026-04-01 00:02:24.761239 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.761242 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761246 | orchestrator | } 2026-04-01 00:02:24.761250 | orchestrator | } 2026-04-01 00:02:24.761256 | orchestrator | 2026-04-01 00:02:24.761260 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-01 00:02:24.761264 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-01 00:02:24.761267 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-01 00:02:24.761271 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-01 00:02:24.761275 | orchestrator | + all_metadata = (known after apply) 2026-04-01 00:02:24.761279 | orchestrator | + all_tags = (known after apply) 2026-04-01 00:02:24.761282 | orchestrator | + availability_zone = "nova" 2026-04-01 00:02:24.761286 | orchestrator | + config_drive = true 2026-04-01 00:02:24.761290 | orchestrator | + created = (known after apply) 2026-04-01 00:02:24.761293 | orchestrator | + flavor_id = (known after apply) 2026-04-01 00:02:24.761297 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-01 00:02:24.761301 | orchestrator | + force_delete = false 2026-04-01 00:02:24.761307 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-01 00:02:24.761311 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.761315 | orchestrator | + image_id = (known after apply) 2026-04-01 00:02:24.761318 | orchestrator | + image_name = (known after apply) 2026-04-01 00:02:24.761322 | orchestrator | + key_pair = "testbed" 2026-04-01 00:02:24.761326 | orchestrator | + name = "testbed-node-5" 2026-04-01 00:02:24.761330 | orchestrator | + power_state = "active" 2026-04-01 00:02:24.761333 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.761337 | orchestrator | + security_groups = (known after apply) 2026-04-01 00:02:24.761341 | orchestrator | + 
stop_before_destroy = false 2026-04-01 00:02:24.761344 | orchestrator | + updated = (known after apply) 2026-04-01 00:02:24.761348 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-01 00:02:24.761352 | orchestrator | 2026-04-01 00:02:24.761355 | orchestrator | + block_device { 2026-04-01 00:02:24.761359 | orchestrator | + boot_index = 0 2026-04-01 00:02:24.761363 | orchestrator | + delete_on_termination = false 2026-04-01 00:02:24.761366 | orchestrator | + destination_type = "volume" 2026-04-01 00:02:24.761370 | orchestrator | + multiattach = false 2026-04-01 00:02:24.761374 | orchestrator | + source_type = "volume" 2026-04-01 00:02:24.761378 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761381 | orchestrator | } 2026-04-01 00:02:24.761385 | orchestrator | 2026-04-01 00:02:24.761389 | orchestrator | + network { 2026-04-01 00:02:24.761392 | orchestrator | + access_network = false 2026-04-01 00:02:24.761396 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-01 00:02:24.761400 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-01 00:02:24.761404 | orchestrator | + mac = (known after apply) 2026-04-01 00:02:24.761407 | orchestrator | + name = (known after apply) 2026-04-01 00:02:24.761411 | orchestrator | + port = (known after apply) 2026-04-01 00:02:24.761415 | orchestrator | + uuid = (known after apply) 2026-04-01 00:02:24.761418 | orchestrator | } 2026-04-01 00:02:24.761422 | orchestrator | } 2026-04-01 00:02:24.761426 | orchestrator | 2026-04-01 00:02:24.761430 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-01 00:02:24.761433 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-01 00:02:24.761437 | orchestrator | + fingerprint = (known after apply) 2026-04-01 00:02:24.761441 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.761444 | orchestrator | + name = "testbed" 2026-04-01 00:02:24.761448 | orchestrator | + private_key = 
(sensitive value) 2026-04-01 00:02:24.761452 | orchestrator | + public_key = (known after apply) 2026-04-01 00:02:24.761455 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.761459 | orchestrator | + user_id = (known after apply) 2026-04-01 00:02:24.761463 | orchestrator | } 2026-04-01 00:02:24.761469 | orchestrator | 2026-04-01 00:02:24.761473 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-01 00:02:24.761477 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-01 00:02:24.761483 | orchestrator | + device = (known after apply) 2026-04-01 00:02:24.761487 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.761491 | orchestrator | + instance_id = (known after apply) 2026-04-01 00:02:24.761494 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.761498 | orchestrator | + volume_id = (known after apply) 2026-04-01 00:02:24.761502 | orchestrator | } 2026-04-01 00:02:24.761506 | orchestrator | 2026-04-01 00:02:24.761509 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-01 00:02:24.761513 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-01 00:02:24.761517 | orchestrator | + device = (known after apply) 2026-04-01 00:02:24.761521 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.761524 | orchestrator | + instance_id = (known after apply) 2026-04-01 00:02:24.761528 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.761532 | orchestrator | + volume_id = (known after apply) 2026-04-01 00:02:24.761535 | orchestrator | } 2026-04-01 00:02:24.761539 | orchestrator | 2026-04-01 00:02:24.761543 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-01 00:02:24.761547 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-04-01 00:02:24.761550 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-01 00:02:24.763750 | orchestrator | + network_id = (known after apply) 2026-04-01 00:02:24.763753 | orchestrator | + no_gateway = false 2026-04-01 00:02:24.763757 | orchestrator | + region = (known after apply) 2026-04-01 00:02:24.763761 | orchestrator | + service_types = (known after apply) 2026-04-01 00:02:24.763767 | orchestrator | + tenant_id = (known after apply) 2026-04-01 00:02:24.763771 | orchestrator | 2026-04-01 00:02:24.763774 | orchestrator | + allocation_pool { 2026-04-01 00:02:24.763778 | orchestrator | + end = "192.168.31.250" 2026-04-01 00:02:24.763782 | orchestrator | + start = "192.168.31.200" 2026-04-01 00:02:24.763785 | orchestrator | } 2026-04-01 00:02:24.763789 | orchestrator | } 2026-04-01 00:02:24.763793 | orchestrator | 2026-04-01 00:02:24.763796 | orchestrator | # terraform_data.image will be created 2026-04-01 00:02:24.763800 | orchestrator | + resource "terraform_data" "image" { 2026-04-01 00:02:24.763806 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.763810 | orchestrator | + input = "Ubuntu 24.04" 2026-04-01 00:02:24.763813 | orchestrator | + output = (known after apply) 2026-04-01 00:02:24.763817 | orchestrator | } 2026-04-01 00:02:24.763821 | orchestrator | 2026-04-01 00:02:24.763824 | orchestrator | # terraform_data.image_node will be created 2026-04-01 00:02:24.763828 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-01 00:02:24.763832 | orchestrator | + id = (known after apply) 2026-04-01 00:02:24.763835 | orchestrator | + input = "Ubuntu 24.04" 2026-04-01 00:02:24.763839 | orchestrator | + output = (known after apply) 2026-04-01 00:02:24.763843 | orchestrator | } 2026-04-01 00:02:24.763846 | orchestrator | 2026-04-01 00:02:24.763850 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-01 00:02:24.763854 | orchestrator |
2026-04-01 00:02:24.763858 | orchestrator | Changes to Outputs:
2026-04-01 00:02:24.763861 | orchestrator |   + manager_address = (sensitive value)
2026-04-01 00:02:24.763865 | orchestrator |   + private_key     = (sensitive value)
2026-04-01 00:02:25.025489 | orchestrator | terraform_data.image_node: Creating...
2026-04-01 00:02:25.025912 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=df7eab6f-d8cd-91a1-8713-4e727b3c956a]
2026-04-01 00:02:25.026699 | orchestrator | terraform_data.image: Creating...
2026-04-01 00:02:25.027600 | orchestrator | terraform_data.image: Creation complete after 0s [id=bb6f7e70-5355-bfa9-cf42-49cb71d2dd26]
2026-04-01 00:02:27.544329 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-01 00:02:27.552819 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-01 00:02:27.555192 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-01 00:02:27.556365 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-01 00:02:27.556750 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-01 00:02:27.557916 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-01 00:02:27.557976 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-01 00:02:27.557986 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-01 00:02:27.558718 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-01 00:02:27.562383 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-01 00:02:28.010442 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-01 00:02:28.018063 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-01 00:02:28.112526 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-01 00:02:28.121372 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-01 00:02:28.509854 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=d1f9d332-8fa9-40b5-94e9-747cd4c7ca0e]
2026-04-01 00:02:28.514669 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-01 00:02:28.570442 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-01 00:02:28.582076 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-01 00:02:31.185754 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=b090d968-077f-4316-a7cb-bda539f6db67]
2026-04-01 00:02:31.201649 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-01 00:02:31.211457 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=cde0314278869988341014135e87d58a6684b483]
2026-04-01 00:02:31.220603 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=e5794c61-1895-432b-bae0-e64b20adb363]
2026-04-01 00:02:31.221774 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-01 00:02:31.225442 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-01 00:02:31.225535 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=b3d2aa6da0500e21b4a2aad9c1b8fb8dbf01c485]
2026-04-01 00:02:31.230803 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-01 00:02:31.249233 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=ac6b0a42-475e-47b3-b6b9-8775ae6256f7]
2026-04-01 00:02:31.249296 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=1a9aff5c-ee70-4834-ada6-16d88406b9f4]
2026-04-01 00:02:31.252631 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-01 00:02:31.253569 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-01 00:02:31.270402 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=181eb0d3-49bd-41f1-8f26-95e9754c9896]
2026-04-01 00:02:31.274050 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-01 00:02:31.278463 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=91aabfbd-d205-4d26-bb68-6c75b4d02402]
2026-04-01 00:02:31.282220 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-01 00:02:31.295978 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=289dd0c3-dff8-4236-9edf-8ec702693da7]
2026-04-01 00:02:31.299415 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-01 00:02:31.332213 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=a922e28e-6911-40ed-8ea7-c2624142d8a1]
2026-04-01 00:02:31.371625 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=93c245f0-d55e-41f5-879e-2175ba1dd005]
2026-04-01 00:02:31.920853 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=c0be60f3-dcdb-44b4-942d-5c7c69229fe7]
2026-04-01 00:02:33.598133 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 3s [id=cd788266-7e24-456a-8ea6-00247502885d]
2026-04-01 00:02:33.605212 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-01 00:02:34.644017 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=a7e8e07f-8fa0-4520-8bbc-80ec122b709d]
2026-04-01 00:02:34.733129 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=bc455e6b-b8ba-47d8-ab01-6be8b039ad3d]
2026-04-01 00:02:34.753653 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=afbe02e5-eb4b-4e1e-8854-e9a45bf0751c]
2026-04-01 00:02:34.782811 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=bb71c8c2-ca64-4a55-b962-663cadefaf49]
2026-04-01 00:02:34.806494 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=2a4f4914-3be5-4bda-a47c-01d9519cb486]
2026-04-01 00:02:34.846251 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=acd982d5-be51-4ada-8242-b77ed84f08a9]
2026-04-01 00:02:37.883405 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=a9fe2aa4-1640-45a4-9b89-f9d651422775]
2026-04-01 00:02:37.890406 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-01 00:02:37.890951 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-01 00:02:37.891561 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-01 00:02:38.103564 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=abb9971a-6c18-4ab4-a7f9-f689337799bc]
2026-04-01 00:02:38.115448 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-01 00:02:38.119128 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-01 00:02:38.119182 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-01 00:02:38.126787 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-01 00:02:38.127340 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-01 00:02:38.130634 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-01 00:02:38.305019 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=05fcf4c3-76e8-4446-a3bf-17710084806f]
2026-04-01 00:02:38.310929 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=477bd981-a275-43ad-b920-71dfcee2a23c]
2026-04-01 00:02:38.322145 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-01 00:02:38.322230 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-01 00:02:38.322244 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-01 00:02:38.326040 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-01 00:02:38.562894 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=10509cea-86eb-4280-8f10-4aae9a71cdce]
2026-04-01 00:02:38.574612 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-01 00:02:38.629849 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=3ccb1c57-732e-4343-b4a9-8c1061b98bb2]
2026-04-01 00:02:38.642909 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-01 00:02:38.740357 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=e4c995fc-2ecb-4091-bbc0-253340ef4688]
2026-04-01 00:02:38.746300 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-01 00:02:38.834262 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=116df407-6380-4a15-9731-e8cc4f80da84]
2026-04-01 00:02:38.843770 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-01 00:02:38.912196 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b04f8416-6bff-43a7-b791-c6a9a1a3dfe3]
2026-04-01 00:02:38.924261 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-01 00:02:39.048273 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=62dcba12-addc-451e-9a61-78d47a4e2eef]
2026-04-01 00:02:39.055750 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-01 00:02:39.096830 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=4148aa15-8ca1-4ec1-a4ab-790a33067930]
2026-04-01 00:02:39.211613 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=f2a834cc-f0b5-43f4-b2a9-f68ef9ef46c3]
2026-04-01 00:02:39.225937 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=1c0f3d37-f062-44ce-96be-e8f94c70a28f]
2026-04-01 00:02:39.529347 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=fc256285-2006-4b56-9161-af80b0afa8f1]
2026-04-01 00:02:39.665451 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=90ecc4f2-c472-426a-937f-6901c27de743]
2026-04-01 00:02:39.693649 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9cb1feae-a58a-4c48-86d3-e38988c0517c]
2026-04-01 00:02:39.791446 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=00e5b56c-6b35-4e88-abec-e08a4d55e2a1]
2026-04-01 00:02:39.794385 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=8afaa5f5-c130-4dc1-99f9-3ca1dae4c2c1]
2026-04-01 00:02:39.918903 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=5c4c43eb-7eb5-4e70-bddc-22f3a0eb758a]
2026-04-01 00:02:40.961689 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=c16c7a5b-cf84-47a3-8d2f-f37e9a8f1475]
2026-04-01 00:02:40.985747 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-01 00:02:40.991892 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-01 00:02:41.000468 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-01 00:02:41.007817 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-01 00:02:41.010382 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-01 00:02:41.017687 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-01 00:02:41.018592 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-01 00:02:43.157275 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=5c824151-f0ad-436f-ae6c-da9b443eea68]
2026-04-01 00:02:43.169959 | orchestrator | local_file.inventory: Creating...
2026-04-01 00:02:43.170079 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-01 00:02:43.173206 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-01 00:02:43.174042 | orchestrator | local_file.inventory: Creation complete after 0s [id=69ab4fdffad370077a19d089dd773560c3873166]
2026-04-01 00:02:43.180251 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1e1a8f09263a9cccb5c701d1e4994442773162ff]
2026-04-01 00:02:44.001843 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5c824151-f0ad-436f-ae6c-da9b443eea68]
2026-04-01 00:02:50.993268 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-01 00:02:51.003878 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-01 00:02:51.009140 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-01 00:02:51.015544 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-01 00:02:51.019917 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-01 00:02:51.020022 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-01 00:03:01.001222 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-01 00:03:01.004436 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-01 00:03:01.009877 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-01 00:03:01.016172 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-01 00:03:01.020472 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-01 00:03:01.020736 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-01 00:03:11.009861 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-01 00:03:11.010123 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-01 00:03:11.010867 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-01 00:03:11.017277 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-01 00:03:11.021531 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-01 00:03:11.021595 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-01 00:03:11.749405 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=0aced48e-82ad-462c-9dd1-569c314072d7]
2026-04-01 00:03:21.019061 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-01 00:03:21.019169 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-01 00:03:21.019183 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-01 00:03:21.022424 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-01 00:03:21.022530 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-01 00:03:31.027835 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-01 00:03:31.027969 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-01 00:03:31.027981 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-04-01 00:03:31.027998 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-01 00:03:31.028005 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-01 00:03:41.036486 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-04-01 00:03:41.036572 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-04-01 00:03:41.036579 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-04-01 00:03:41.036591 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-04-01 00:03:41.036595 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-04-01 00:03:41.866750 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=37f685e9-0fab-4c2c-a0b3-165beced17c8]
2026-04-01 00:03:51.040756 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-04-01 00:03:51.040854 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-04-01 00:03:51.040864 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m10s elapsed]
2026-04-01 00:03:51.040870 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m10s elapsed]
2026-04-01 00:03:51.975982 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m11s [id=3104c57a-c130-4db8-9216-ab77e6b810b6]
2026-04-01 00:03:52.440316 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m11s [id=2265ba43-befb-4164-a82d-784eb86a1671]
2026-04-01 00:03:52.491161 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m11s [id=41d5c39a-6df8-4a3c-a64e-d7e163fdfd15]
2026-04-01 00:04:01.041039 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m20s elapsed]
2026-04-01 00:04:02.539586 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m22s [id=47c4d131-018f-48aa-8df3-2861e522ba69]
2026-04-01 00:04:02.608435 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-01 00:04:02.609707 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-01 00:04:02.616278 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-01 00:04:02.623696 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1765288559445690444]
2026-04-01 00:04:02.623770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-01 00:04:02.624411 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-01 00:04:02.624443 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-01 00:04:02.626060 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-01 00:04:02.648385 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-01 00:04:02.660187 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-01 00:04:02.668978 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-01 00:04:02.679220 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-01 00:04:06.026100 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=3104c57a-c130-4db8-9216-ab77e6b810b6/a922e28e-6911-40ed-8ea7-c2624142d8a1]
2026-04-01 00:04:06.047231 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=2265ba43-befb-4164-a82d-784eb86a1671/b090d968-077f-4316-a7cb-bda539f6db67]
2026-04-01 00:04:06.051970 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=47c4d131-018f-48aa-8df3-2861e522ba69/289dd0c3-dff8-4236-9edf-8ec702693da7]
2026-04-01 00:04:06.070361 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=3104c57a-c130-4db8-9216-ab77e6b810b6/91aabfbd-d205-4d26-bb68-6c75b4d02402]
2026-04-01 00:04:06.085066 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=2265ba43-befb-4164-a82d-784eb86a1671/ac6b0a42-475e-47b3-b6b9-8775ae6256f7]
2026-04-01 00:04:06.086723 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=47c4d131-018f-48aa-8df3-2861e522ba69/93c245f0-d55e-41f5-879e-2175ba1dd005]
2026-04-01 00:04:12.179545 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=3104c57a-c130-4db8-9216-ab77e6b810b6/181eb0d3-49bd-41f1-8f26-95e9754c9896]
2026-04-01 00:04:12.183625 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=47c4d131-018f-48aa-8df3-2861e522ba69/1a9aff5c-ee70-4834-ada6-16d88406b9f4]
2026-04-01 00:04:12.210734 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=2265ba43-befb-4164-a82d-784eb86a1671/e5794c61-1895-432b-bae0-e64b20adb363]
2026-04-01 00:04:12.675069 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-01 00:04:22.684326 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-01 00:04:23.217341 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=dd55c019-e8bb-4377-b9e7-44acdffe213d]
2026-04-01 00:04:23.656447 | orchestrator |
2026-04-01 00:04:23.656530 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-01 00:04:23.656545 | orchestrator |
2026-04-01 00:04:23.656554 | orchestrator | Outputs:
2026-04-01 00:04:23.656558 | orchestrator |
2026-04-01 00:04:23.656562 | orchestrator | manager_address =
2026-04-01 00:04:23.656567 | orchestrator | private_key =
2026-04-01 00:04:23.757174 | orchestrator | ok: Runtime: 0:02:06.107553
2026-04-01 00:04:23.778053 |
2026-04-01 00:04:23.778176 | TASK [Fetch manager address]
2026-04-01 00:04:24.279371 | orchestrator | ok
2026-04-01 00:04:24.288538 |
2026-04-01 00:04:24.288668 | TASK [Set manager_host address]
2026-04-01 00:04:24.360241 | orchestrator | ok
2026-04-01 00:04:24.367249 |
2026-04-01 00:04:24.367360 | LOOP [Update ansible collections]
2026-04-01 00:04:25.378162 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-01 00:04:25.378436 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-01 00:04:25.378474 | orchestrator | Starting galaxy collection install process
2026-04-01 00:04:25.378498 | orchestrator | Process install dependency map
2026-04-01 00:04:25.378530 | orchestrator | Starting collection install process
2026-04-01 00:04:25.378552 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-04-01 00:04:25.378577 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-04-01 00:04:25.378602 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-01 00:04:25.378655 | orchestrator | ok: Item: commons Runtime: 0:00:00.632543
2026-04-01 00:04:26.275703 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-01 00:04:26.275942 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-01 00:04:26.276021 | orchestrator | Starting galaxy collection install process
2026-04-01 00:04:26.276081 | orchestrator | Process install dependency map
2026-04-01 00:04:26.276135 | orchestrator | Starting collection install process
2026-04-01 00:04:26.276186 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-04-01 00:04:26.276236 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-04-01 00:04:26.276283 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-01 00:04:26.276359 | orchestrator | ok: Item: services Runtime: 0:00:00.618358
2026-04-01 00:04:26.288368 |
2026-04-01 00:04:26.288498 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-01 00:04:36.898232 | orchestrator | ok
2026-04-01 00:04:36.908230 |
2026-04-01 00:04:36.908369 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-01 00:05:36.950196 | orchestrator | ok
2026-04-01 00:05:36.961609 |
2026-04-01 00:05:36.961731 | TASK [Fetch manager ssh hostkey]
2026-04-01 00:05:38.534969 | orchestrator | Output suppressed because no_log was given
2026-04-01 00:05:38.547212 |
2026-04-01 00:05:38.547430 | TASK [Get ssh keypair from terraform environment]
2026-04-01 00:05:39.105638 | orchestrator | ok: Runtime: 0:00:00.007211
2026-04-01 00:05:39.115292 |
2026-04-01 00:05:39.115418 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-01 00:05:39.145873 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-01 00:05:39.152950 | 2026-04-01 00:05:39.153086 | TASK [Run manager part 0] 2026-04-01 00:05:40.241481 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-01 00:05:40.293290 | orchestrator | 2026-04-01 00:05:40.293359 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-01 00:05:40.293368 | orchestrator | 2026-04-01 00:05:40.293385 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-01 00:05:42.259570 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:42.259630 | orchestrator | 2026-04-01 00:05:42.259655 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-01 00:05:42.259665 | orchestrator | 2026-04-01 00:05:42.259674 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:05:44.213276 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:44.213356 | orchestrator | 2026-04-01 00:05:44.213377 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-01 00:05:44.902569 | orchestrator | ok: [testbed-manager] 2026-04-01 00:05:44.902841 | orchestrator | 2026-04-01 00:05:44.902864 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-01 00:05:44.953476 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:44.953539 | orchestrator | 2026-04-01 00:05:44.953550 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-01 00:05:44.989012 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:44.989078 | orchestrator | 2026-04-01 00:05:44.989086 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-01 00:05:45.024443 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:05:45.024499 | 
orchestrator | 2026-04-01 00:05:45.024507 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-01 00:05:45.751175 | orchestrator | changed: [testbed-manager] 2026-04-01 00:05:45.751251 | orchestrator | 2026-04-01 00:05:45.751266 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-01 00:09:06.667715 | orchestrator | changed: [testbed-manager] 2026-04-01 00:09:06.667797 | orchestrator | 2026-04-01 00:09:06.667811 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-01 00:10:36.747748 | orchestrator | changed: [testbed-manager] 2026-04-01 00:10:36.747896 | orchestrator | 2026-04-01 00:10:36.747911 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-01 00:11:56.257389 | orchestrator | changed: [testbed-manager] 2026-04-01 00:11:56.257769 | orchestrator | 2026-04-01 00:11:56.257798 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-01 00:12:04.786100 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:04.786168 | orchestrator | 2026-04-01 00:12:04.786186 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-01 00:12:04.836942 | orchestrator | ok: [testbed-manager] 2026-04-01 00:12:04.837050 | orchestrator | 2026-04-01 00:12:04.837080 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-01 00:12:05.603419 | orchestrator | ok: [testbed-manager] 2026-04-01 00:12:05.603565 | orchestrator | 2026-04-01 00:12:05.603587 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-01 00:12:06.288182 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:06.288228 | orchestrator | 2026-04-01 00:12:06.288239 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-01 00:12:12.146387 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:12.146440 | orchestrator | 2026-04-01 00:12:12.146455 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-01 00:12:17.591768 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:17.591815 | orchestrator | 2026-04-01 00:12:17.591824 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-01 00:12:20.131453 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:20.131562 | orchestrator | 2026-04-01 00:12:20.131581 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-01 00:12:21.845550 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:21.845654 | orchestrator | 2026-04-01 00:12:21.845671 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-01 00:12:22.921183 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-01 00:12:22.921235 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-01 00:12:22.921242 | orchestrator | 2026-04-01 00:12:22.921249 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-01 00:12:22.959014 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-01 00:12:22.959079 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-01 00:12:22.959091 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-01 00:12:22.959103 | orchestrator | deprecation_warnings=False in ansible.cfg. 
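Several of the install tasks above pin minimum versions (requests >= 2.32.2, docker >= 7.1.0). pip enforces these specifiers itself; purely as an illustration of what such a lower-bound check means, it can be approximated in shell with GNU sort's version ordering (this helper is not part of the testbed scripts):

```shell
# Illustrative helper (not from the testbed repo): true if $1 >= $2,
# because the smaller version sorts first under `sort -V`.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

version_ge 2.33.1 2.32.2 && echo "requests ok"   # 2.33.1 satisfies >= 2.32.2
version_ge 7.1.0 7.1.0   && echo "docker ok"     # equal versions also pass
```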
2026-04-01 00:12:28.692501 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-01 00:12:28.692660 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-01 00:12:28.692673 | orchestrator | 2026-04-01 00:12:28.692681 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-01 00:12:29.248166 | orchestrator | changed: [testbed-manager] 2026-04-01 00:12:29.248207 | orchestrator | 2026-04-01 00:12:29.248215 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-01 00:15:51.978653 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-01 00:15:51.978736 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-01 00:15:51.978748 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-01 00:15:51.978756 | orchestrator | 2026-04-01 00:15:51.978764 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-01 00:15:54.297799 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-01 00:15:54.297932 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-01 00:15:54.297951 | orchestrator | 2026-04-01 00:15:54.297965 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-01 00:15:54.297976 | orchestrator | 2026-04-01 00:15:54.297987 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:15:55.743670 | orchestrator | ok: [testbed-manager] 2026-04-01 00:15:55.743705 | orchestrator | 2026-04-01 00:15:55.743711 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-01 00:15:55.783842 | orchestrator | ok: [testbed-manager] 2026-04-01 00:15:55.783885 | 
orchestrator | 2026-04-01 00:15:55.783917 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-01 00:15:55.851638 | orchestrator | ok: [testbed-manager] 2026-04-01 00:15:55.851680 | orchestrator | 2026-04-01 00:15:55.851688 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-01 00:15:56.620470 | orchestrator | changed: [testbed-manager] 2026-04-01 00:15:56.620561 | orchestrator | 2026-04-01 00:15:56.620579 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-01 00:15:57.346575 | orchestrator | changed: [testbed-manager] 2026-04-01 00:15:57.346672 | orchestrator | 2026-04-01 00:15:57.346689 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-01 00:15:58.718515 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-01 00:15:58.718559 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-01 00:15:58.718566 | orchestrator | 2026-04-01 00:15:58.718573 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-01 00:16:00.084381 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:00.084472 | orchestrator | 2026-04-01 00:16:00.084487 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-01 00:16:01.816716 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:16:01.816807 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-01 00:16:01.816839 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:16:01.816852 | orchestrator | 2026-04-01 00:16:01.816865 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-01 00:16:01.870758 | orchestrator | skipping: 
[testbed-manager] 2026-04-01 00:16:01.870845 | orchestrator | 2026-04-01 00:16:01.870861 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-01 00:16:01.957338 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:01.957419 | orchestrator | 2026-04-01 00:16:01.957434 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-01 00:16:02.508876 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:02.509608 | orchestrator | 2026-04-01 00:16:02.509635 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-01 00:16:02.582407 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:02.582491 | orchestrator | 2026-04-01 00:16:02.582506 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-01 00:16:03.419967 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:16:03.420066 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:03.420083 | orchestrator | 2026-04-01 00:16:03.420096 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-01 00:16:03.464358 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:03.464445 | orchestrator | 2026-04-01 00:16:03.464462 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-01 00:16:03.496676 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:03.496766 | orchestrator | 2026-04-01 00:16:03.496784 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-01 00:16:03.531197 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:03.531287 | orchestrator | 2026-04-01 00:16:03.531303 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-01 00:16:03.610586 | 
orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:03.610678 | orchestrator | 2026-04-01 00:16:03.610693 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-01 00:16:04.326246 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:04.326339 | orchestrator | 2026-04-01 00:16:04.326356 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-01 00:16:04.326369 | orchestrator | 2026-04-01 00:16:04.326382 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:16:05.696019 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:05.696072 | orchestrator | 2026-04-01 00:16:05.696083 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-01 00:16:06.653041 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:06.653136 | orchestrator | 2026-04-01 00:16:06.653161 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:16:06.653179 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-01 00:16:06.653191 | orchestrator | 2026-04-01 00:16:07.058308 | orchestrator | ok: Runtime: 0:10:27.235482 2026-04-01 00:16:07.078271 | 2026-04-01 00:16:07.078425 | TASK [Point out that logging in on the manager is now possible] 2026-04-01 00:16:07.125322 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-01 00:16:07.135838 | 2026-04-01 00:16:07.135966 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-01 00:16:07.170311 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
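The PLAY RECAP line above (ok=33 changed=23 unreachable=0 failed=0 …) is the per-host summary that decides whether a run counted as clean. A hypothetical wrapper check (not part of the testbed; `recap_ok` is invented for this sketch, and assumes a single-host recap line) that treats any failed or unreachable host as a job failure:

```shell
# Hypothetical sketch: pull the failed=/unreachable= counters out of one
# ansible-playbook recap line and succeed only when both are zero.
recap_ok() {
    line=$1
    failed=$(printf '%s\n' "$line" | grep -o 'failed=[0-9]*' | cut -d= -f2)
    unreachable=$(printf '%s\n' "$line" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
    [ "${failed:-0}" -eq 0 ] && [ "${unreachable:-0}" -eq 0 ]
}

recap_ok "testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10" \
    && echo "recap clean"
```

In practice the job relies on ansible-playbook's own exit status rather than scraping the recap; the sketch only makes the counters' meaning explicit.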
2026-04-01 00:16:07.178619 | 2026-04-01 00:16:07.178733 | TASK [Run manager part 1 + 2] 2026-04-01 00:16:08.324302 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-01 00:16:08.388243 | orchestrator | 2026-04-01 00:16:08.388338 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-01 00:16:08.388356 | orchestrator | 2026-04-01 00:16:08.388388 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:16:11.448559 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:11.448639 | orchestrator | 2026-04-01 00:16:11.448696 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-01 00:16:11.487333 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:11.487384 | orchestrator | 2026-04-01 00:16:11.487399 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-01 00:16:11.530461 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:11.530525 | orchestrator | 2026-04-01 00:16:11.530545 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-01 00:16:11.567134 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:11.567302 | orchestrator | 2026-04-01 00:16:11.567319 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-01 00:16:11.631866 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:11.631968 | orchestrator | 2026-04-01 00:16:11.631991 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-01 00:16:11.704180 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:11.704242 | orchestrator | 2026-04-01 00:16:11.704258 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-01 00:16:11.747586 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-01 00:16:11.747644 | orchestrator | 2026-04-01 00:16:11.747659 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-01 00:16:12.463094 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:12.463161 | orchestrator | 2026-04-01 00:16:12.463181 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-01 00:16:12.514718 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:12.514784 | orchestrator | 2026-04-01 00:16:12.514800 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-01 00:16:13.864899 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:13.864942 | orchestrator | 2026-04-01 00:16:13.864957 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-01 00:16:14.441017 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:14.441054 | orchestrator | 2026-04-01 00:16:14.441062 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-01 00:16:15.542693 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:15.542736 | orchestrator | 2026-04-01 00:16:15.542744 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-01 00:16:30.923697 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:30.923794 | orchestrator | 2026-04-01 00:16:30.923811 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-01 00:16:31.594400 | orchestrator | ok: [testbed-manager] 2026-04-01 00:16:31.594492 | orchestrator | 2026-04-01 00:16:31.594511 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-01 00:16:31.649277 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:31.649330 | orchestrator | 2026-04-01 00:16:31.649338 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-01 00:16:32.596370 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:32.596464 | orchestrator | 2026-04-01 00:16:32.596483 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-01 00:16:33.533421 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:33.533489 | orchestrator | 2026-04-01 00:16:33.533499 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-01 00:16:34.065535 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:34.065575 | orchestrator | 2026-04-01 00:16:34.065581 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-01 00:16:34.101364 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-01 00:16:34.101532 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-01 00:16:34.101554 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-01 00:16:34.101568 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-01 00:16:36.869749 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:36.869916 | orchestrator | 2026-04-01 00:16:36.869936 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-01 00:16:45.465227 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-01 00:16:45.465272 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-01 00:16:45.465281 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-01 00:16:45.465287 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-01 00:16:45.465297 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-01 00:16:45.465303 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-01 00:16:45.465309 | orchestrator | 2026-04-01 00:16:45.465316 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-01 00:16:46.486386 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:46.486476 | orchestrator | 2026-04-01 00:16:46.486493 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-01 00:16:49.477597 | orchestrator | changed: [testbed-manager] 2026-04-01 00:16:49.477640 | orchestrator | 2026-04-01 00:16:49.477649 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-01 00:16:49.522754 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:16:49.522796 | orchestrator | 2026-04-01 00:16:49.522805 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-01 00:18:28.229109 | orchestrator | changed: [testbed-manager] 2026-04-01 00:18:28.229172 | orchestrator | 2026-04-01 00:18:28.229184 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-01 00:18:29.247723 | orchestrator | ok: [testbed-manager] 2026-04-01 00:18:29.247760 | 
orchestrator | 2026-04-01 00:18:29.247769 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:18:29.247776 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-01 00:18:29.247781 | orchestrator | 2026-04-01 00:18:29.799808 | orchestrator | ok: Runtime: 0:02:21.870923 2026-04-01 00:18:29.818601 | 2026-04-01 00:18:29.818791 | TASK [Reboot manager] 2026-04-01 00:18:31.357654 | orchestrator | ok: Runtime: 0:00:00.893905 2026-04-01 00:18:31.375664 | 2026-04-01 00:18:31.375856 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-01 00:18:44.879145 | orchestrator | ok 2026-04-01 00:18:44.889047 | 2026-04-01 00:18:44.889173 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-01 00:19:44.930749 | orchestrator | ok 2026-04-01 00:19:44.940900 | 2026-04-01 00:19:44.941058 | TASK [Deploy manager + bootstrap nodes] 2026-04-01 00:19:47.291911 | orchestrator | 2026-04-01 00:19:47.292100 | orchestrator | # DEPLOY MANAGER 2026-04-01 00:19:47.292124 | orchestrator | 2026-04-01 00:19:47.292139 | orchestrator | + set -e 2026-04-01 00:19:47.292152 | orchestrator | + echo 2026-04-01 00:19:47.292166 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-01 00:19:47.292183 | orchestrator | + echo 2026-04-01 00:19:47.292234 | orchestrator | + cat /opt/manager-vars.sh 2026-04-01 00:19:47.294938 | orchestrator | export NUMBER_OF_NODES=6 2026-04-01 00:19:47.294988 | orchestrator | 2026-04-01 00:19:47.294997 | orchestrator | export CEPH_VERSION= 2026-04-01 00:19:47.295004 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-01 00:19:47.295011 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-01 00:19:47.295018 | orchestrator | export OPENSTACK_VERSION= 2026-04-01 00:19:47.295023 | orchestrator | 2026-04-01 00:19:47.295028 | orchestrator | export ARA=false 2026-04-01 00:19:47.295039 | orchestrator | export 
DEPLOY_MODE=manager 2026-04-01 00:19:47.295044 | orchestrator | export TEMPEST=true 2026-04-01 00:19:47.295050 | orchestrator | export IS_ZUUL=true 2026-04-01 00:19:47.295059 | orchestrator | 2026-04-01 00:19:47.295069 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126 2026-04-01 00:19:47.295099 | orchestrator | export EXTERNAL_API=false 2026-04-01 00:19:47.295104 | orchestrator | 2026-04-01 00:19:47.295113 | orchestrator | export IMAGE_USER=ubuntu 2026-04-01 00:19:47.295119 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-01 00:19:47.295124 | orchestrator | 2026-04-01 00:19:47.295133 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-01 00:19:47.295219 | orchestrator | 2026-04-01 00:19:47.295228 | orchestrator | + echo 2026-04-01 00:19:47.295233 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-01 00:19:47.296295 | orchestrator | ++ export INTERACTIVE=false 2026-04-01 00:19:47.296306 | orchestrator | ++ INTERACTIVE=false 2026-04-01 00:19:47.296312 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-01 00:19:47.296319 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-01 00:19:47.296448 | orchestrator | + source /opt/manager-vars.sh 2026-04-01 00:19:47.296457 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 00:19:47.296462 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 00:19:47.296566 | orchestrator | ++ export CEPH_VERSION= 2026-04-01 00:19:47.296574 | orchestrator | ++ CEPH_VERSION= 2026-04-01 00:19:47.296579 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 00:19:47.296584 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 00:19:47.296605 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-01 00:19:47.296659 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-01 00:19:47.296665 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-01 00:19:47.296670 | orchestrator | ++ OPENSTACK_VERSION= 2026-04-01 00:19:47.296725 | orchestrator | ++ export ARA=false 2026-04-01 00:19:47.296731 | 
orchestrator | ++ ARA=false 2026-04-01 00:19:47.296736 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-01 00:19:47.296748 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-01 00:19:47.296753 | orchestrator | ++ export TEMPEST=true 2026-04-01 00:19:47.296876 | orchestrator | ++ TEMPEST=true 2026-04-01 00:19:47.296884 | orchestrator | ++ export IS_ZUUL=true 2026-04-01 00:19:47.296889 | orchestrator | ++ IS_ZUUL=true 2026-04-01 00:19:47.296894 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126 2026-04-01 00:19:47.296900 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126 2026-04-01 00:19:47.296905 | orchestrator | ++ export EXTERNAL_API=false 2026-04-01 00:19:47.296910 | orchestrator | ++ EXTERNAL_API=false 2026-04-01 00:19:47.296915 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-01 00:19:47.296920 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-01 00:19:47.296925 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-01 00:19:47.296931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-01 00:19:47.296936 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-01 00:19:47.296941 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-01 00:19:47.296946 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-01 00:19:47.349423 | orchestrator | + docker version 2026-04-01 00:19:47.454260 | orchestrator | Client: Docker Engine - Community 2026-04-01 00:19:47.454364 | orchestrator | Version: 27.5.1 2026-04-01 00:19:47.454380 | orchestrator | API version: 1.47 2026-04-01 00:19:47.454392 | orchestrator | Go version: go1.22.11 2026-04-01 00:19:47.454402 | orchestrator | Git commit: 9f9e405 2026-04-01 00:19:47.454414 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-01 00:19:47.454426 | orchestrator | OS/Arch: linux/amd64 2026-04-01 00:19:47.454438 | orchestrator | Context: default 2026-04-01 00:19:47.454449 | orchestrator | 2026-04-01 00:19:47.454460 | orchestrator | Server: Docker Engine 
- Community 2026-04-01 00:19:47.454471 | orchestrator | Engine: 2026-04-01 00:19:47.454482 | orchestrator | Version: 27.5.1 2026-04-01 00:19:47.454493 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-01 00:19:47.454534 | orchestrator | Go version: go1.22.11 2026-04-01 00:19:47.454546 | orchestrator | Git commit: 4c9b3b0 2026-04-01 00:19:47.454557 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-01 00:19:47.454568 | orchestrator | OS/Arch: linux/amd64 2026-04-01 00:19:47.454578 | orchestrator | Experimental: false 2026-04-01 00:19:47.454635 | orchestrator | containerd: 2026-04-01 00:19:47.454649 | orchestrator | Version: v2.2.2 2026-04-01 00:19:47.454660 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-01 00:19:47.454671 | orchestrator | runc: 2026-04-01 00:19:47.454682 | orchestrator | Version: 1.3.4 2026-04-01 00:19:47.454693 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-01 00:19:47.454704 | orchestrator | docker-init: 2026-04-01 00:19:47.454715 | orchestrator | Version: 0.19.0 2026-04-01 00:19:47.454727 | orchestrator | GitCommit: de40ad0 2026-04-01 00:19:47.456636 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-01 00:19:47.466353 | orchestrator | + set -e 2026-04-01 00:19:47.466421 | orchestrator | + source /opt/manager-vars.sh 2026-04-01 00:19:47.466435 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-01 00:19:47.466446 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-01 00:19:47.466457 | orchestrator | ++ export CEPH_VERSION= 2026-04-01 00:19:47.466468 | orchestrator | ++ CEPH_VERSION= 2026-04-01 00:19:47.466479 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-01 00:19:47.466491 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-01 00:19:47.466502 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-01 00:19:47.466515 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-01 00:19:47.466525 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-01 
00:19:47.466536 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-01 00:19:47.466574 | orchestrator | ++ export ARA=false
2026-04-01 00:19:47.466633 | orchestrator | ++ ARA=false
2026-04-01 00:19:47.466647 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-01 00:19:47.466658 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-01 00:19:47.466669 | orchestrator | ++ export TEMPEST=true
2026-04-01 00:19:47.466685 | orchestrator | ++ TEMPEST=true
2026-04-01 00:19:47.466704 | orchestrator | ++ export IS_ZUUL=true
2026-04-01 00:19:47.466722 | orchestrator | ++ IS_ZUUL=true
2026-04-01 00:19:47.466740 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126
2026-04-01 00:19:47.466758 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126
2026-04-01 00:19:47.466777 | orchestrator | ++ export EXTERNAL_API=false
2026-04-01 00:19:47.466793 | orchestrator | ++ EXTERNAL_API=false
2026-04-01 00:19:47.466811 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-01 00:19:47.466830 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-01 00:19:47.466849 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-01 00:19:47.466868 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-01 00:19:47.466885 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-01 00:19:47.466913 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-01 00:19:47.466924 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-01 00:19:47.466935 | orchestrator | ++ export INTERACTIVE=false
2026-04-01 00:19:47.466947 | orchestrator | ++ INTERACTIVE=false
2026-04-01 00:19:47.466957 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-01 00:19:47.466973 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-01 00:19:47.466985 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-01 00:19:47.466996 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-01 00:19:47.473385 | orchestrator | + set -e
2026-04-01 00:19:47.473441 | orchestrator | + VERSION=10.0.0
2026-04-01 00:19:47.473466 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-01 00:19:47.482522 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-01 00:19:47.482633 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-01 00:19:47.487290 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-01 00:19:47.491658 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-01 00:19:47.499943 | orchestrator | /opt/configuration ~
2026-04-01 00:19:47.499994 | orchestrator | + set -e
2026-04-01 00:19:47.500007 | orchestrator | + pushd /opt/configuration
2026-04-01 00:19:47.500018 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-01 00:19:47.501126 | orchestrator | + source /opt/venv/bin/activate
2026-04-01 00:19:47.502064 | orchestrator | ++ deactivate nondestructive
2026-04-01 00:19:47.502104 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:47.502117 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:47.502144 | orchestrator | ++ hash -r
2026-04-01 00:19:47.502275 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:47.502301 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-01 00:19:47.502313 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-01 00:19:47.502324 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-01 00:19:47.502350 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-01 00:19:47.502387 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-01 00:19:47.502406 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-01 00:19:47.502424 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-01 00:19:47.502445 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-01 00:19:47.502465 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-01 00:19:47.502484 | orchestrator | ++ export PATH
2026-04-01 00:19:47.502554 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:47.503081 | orchestrator | ++ '[' -z '' ']'
2026-04-01 00:19:47.503125 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-01 00:19:47.503139 | orchestrator | ++ PS1='(venv) '
2026-04-01 00:19:47.503152 | orchestrator | ++ export PS1
2026-04-01 00:19:47.503164 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-01 00:19:47.503175 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-01 00:19:47.503186 | orchestrator | ++ hash -r
2026-04-01 00:19:47.503204 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-01 00:19:48.430484 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-01 00:19:48.430899 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-01 00:19:48.432196 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-01 00:19:48.433547 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-01 00:19:48.435016 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-01 00:19:48.444656 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-04-01 00:19:48.445982 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-01 00:19:48.446994 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-01 00:19:48.448332 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-01 00:19:48.472824 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-04-01 00:19:48.474059 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-01 00:19:48.475688 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-01 00:19:48.476936 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-01 00:19:48.480814 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-01 00:19:48.648480 | orchestrator | ++ which gilt
2026-04-01 00:19:48.651845 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-01 00:19:48.651900 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-01 00:19:48.848176 | orchestrator | osism.cfg-generics:
2026-04-01 00:19:48.978967 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-01 00:19:48.979072 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-01 00:19:48.979379 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-01 00:19:48.979413 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-01 00:19:49.681249 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-01 00:19:49.691736 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-01 00:19:49.992458 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-01 00:19:50.027962 | orchestrator | ~
2026-04-01 00:19:50.028078 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-01 00:19:50.028094 | orchestrator | + deactivate
2026-04-01 00:19:50.028112 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-01 00:19:50.028127 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-01 00:19:50.028138 | orchestrator | + export PATH
2026-04-01 00:19:50.028150 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-01 00:19:50.028162 | orchestrator | + '[' -n '' ']'
2026-04-01 00:19:50.028173 | orchestrator | + hash -r
2026-04-01 00:19:50.028183 | orchestrator | + '[' -n '' ']'
2026-04-01 00:19:50.028194 | orchestrator | + unset VIRTUAL_ENV
2026-04-01 00:19:50.028205 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-01 00:19:50.028216 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-01 00:19:50.028227 | orchestrator | + unset -f deactivate
2026-04-01 00:19:50.028239 | orchestrator | + popd
2026-04-01 00:19:50.028504 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-01 00:19:50.028525 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-01 00:19:50.028839 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-01 00:19:50.071778 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-01 00:19:50.071942 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-01 00:19:50.072564 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-01 00:19:50.147844 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-01 00:19:50.147977 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-01 00:19:50.152617 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-01 00:19:50.157061 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-01 00:19:50.229339 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-01 00:19:50.229484 | orchestrator | + source /opt/venv/bin/activate
2026-04-01 00:19:50.229501 | orchestrator | ++ deactivate nondestructive
2026-04-01 00:19:50.229533 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:50.229543 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:50.229565 | orchestrator | ++ hash -r
2026-04-01 00:19:50.229575 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:50.229618 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-01 00:19:50.229630 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-01 00:19:50.229640 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-01 00:19:50.229650 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-01 00:19:50.229677 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-01 00:19:50.229697 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-01 00:19:50.229716 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-01 00:19:50.229737 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-01 00:19:50.229748 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-01 00:19:50.229767 | orchestrator | ++ export PATH
2026-04-01 00:19:50.229781 | orchestrator | ++ '[' -n '' ']'
2026-04-01 00:19:50.229791 | orchestrator | ++ '[' -z '' ']'
2026-04-01 00:19:50.229800 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-01 00:19:50.229810 | orchestrator | ++ PS1='(venv) '
2026-04-01 00:19:50.229819 | orchestrator | ++ export PS1
2026-04-01 00:19:50.229829 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-01 00:19:50.229842 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-01 00:19:50.229852 | orchestrator | ++ hash -r
2026-04-01 00:19:50.230152 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-01 00:19:51.080629 | orchestrator |
2026-04-01 00:19:51.080728 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-01 00:19:51.080741 | orchestrator |
2026-04-01 00:19:51.080750 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-01 00:19:51.570768 | orchestrator | ok: [testbed-manager]
2026-04-01 00:19:51.570890 | orchestrator |
2026-04-01 00:19:51.570925 | orchestrator | TASK [Copy fact files] *********************************************************
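The shell trace above gates several configuration tweaks on a `semver` helper that prints `1`, `0`, or `-1` depending on how two versions compare (e.g. `semver 10.0.0 7.0.0` yields `1`). A minimal stand-in might look like the sketch below; it assumes plain X.Y.Z versions and `sort -V`, with no semver pre-release ordering, so the `10.0.0` vs `10.0.0-0` comparison seen in the trace may behave differently in the real helper.

```shell
#!/bin/sh
# Hypothetical stand-in for the semver helper seen in the trace.
# Prints 1 if $1 > $2, 0 if they are equal, -1 if $1 < $2.
# Assumes plain X.Y.Z versions and GNU sort -V; no pre-release handling.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    # $2 sorts first, so $1 is the newer version
    echo 1
  else
    echo -1
  fi
}

semver 10.0.0 7.0.0   # as in the trace above; prints 1
```

The script then uses the result as `[[ $(semver ...) -ge 0 ]]` to decide whether a feature flag such as `enable_osism_kubernetes` applies to the pinned manager version.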
2026-04-01 00:19:52.450984 | orchestrator | changed: [testbed-manager]
2026-04-01 00:19:52.451089 | orchestrator |
2026-04-01 00:19:52.451106 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-01 00:19:52.451119 | orchestrator |
2026-04-01 00:19:52.451130 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-01 00:19:54.475248 | orchestrator | ok: [testbed-manager]
2026-04-01 00:19:54.475340 | orchestrator |
2026-04-01 00:19:54.475351 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-01 00:19:54.524636 | orchestrator | ok: [testbed-manager]
2026-04-01 00:19:54.524740 | orchestrator |
2026-04-01 00:19:54.524758 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-01 00:19:54.936039 | orchestrator | changed: [testbed-manager]
2026-04-01 00:19:54.936166 | orchestrator |
2026-04-01 00:19:54.936185 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-01 00:19:54.978289 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:19:54.978414 | orchestrator |
2026-04-01 00:19:54.978440 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-01 00:19:55.277241 | orchestrator | changed: [testbed-manager]
2026-04-01 00:19:55.277343 | orchestrator |
2026-04-01 00:19:55.277359 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-01 00:19:55.564802 | orchestrator | ok: [testbed-manager]
2026-04-01 00:19:55.564902 | orchestrator |
2026-04-01 00:19:55.564918 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-01 00:19:55.671076 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:19:55.671166 | orchestrator |
2026-04-01 00:19:55.671181 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-01 00:19:55.671193 | orchestrator |
2026-04-01 00:19:55.671205 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-01 00:19:57.253247 | orchestrator | ok: [testbed-manager]
2026-04-01 00:19:57.253358 | orchestrator |
2026-04-01 00:19:57.253375 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-01 00:19:57.344894 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-01 00:19:57.345004 | orchestrator |
2026-04-01 00:19:57.345022 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-01 00:19:57.393149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-01 00:19:57.393234 | orchestrator |
2026-04-01 00:19:57.393245 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-01 00:19:58.380429 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-01 00:19:58.380691 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-01 00:19:58.380732 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-01 00:19:58.380750 | orchestrator |
2026-04-01 00:19:58.380770 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-01 00:20:00.032888 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-01 00:20:00.033011 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-01 00:20:00.033029 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-01 00:20:00.033043 | orchestrator |
2026-04-01 00:20:00.033101 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-01 00:20:00.601825 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-01 00:20:00.601943 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:00.601973 | orchestrator |
2026-04-01 00:20:00.601993 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-01 00:20:01.163125 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-01 00:20:01.163221 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:01.163236 | orchestrator |
2026-04-01 00:20:01.163249 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-01 00:20:01.210373 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:01.210474 | orchestrator |
2026-04-01 00:20:01.210491 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-01 00:20:01.530385 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:01.530532 | orchestrator |
2026-04-01 00:20:01.530560 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-01 00:20:01.598327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-01 00:20:01.598417 | orchestrator |
2026-04-01 00:20:01.598432 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-01 00:20:02.532768 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:02.532853 | orchestrator |
2026-04-01 00:20:02.532865 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-01 00:20:03.256053 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:03.256152 | orchestrator |
2026-04-01 00:20:03.256168 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-01 00:20:13.223928 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:13.224014 | orchestrator |
2026-04-01 00:20:13.224025 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-01 00:20:13.274013 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:13.274189 | orchestrator |
2026-04-01 00:20:13.274206 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-01 00:20:13.274217 | orchestrator |
2026-04-01 00:20:13.274227 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-01 00:20:15.927717 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:15.927822 | orchestrator |
2026-04-01 00:20:15.927839 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-01 00:20:16.051191 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-01 00:20:16.051283 | orchestrator |
2026-04-01 00:20:16.051298 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-01 00:20:16.123808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-01 00:20:16.123903 | orchestrator |
2026-04-01 00:20:16.123918 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-01 00:20:18.137447 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:18.137608 | orchestrator |
2026-04-01 00:20:18.137628 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-01 00:20:18.193124 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:18.193228 | orchestrator |
2026-04-01 00:20:18.193244 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-01 00:20:18.308189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-01 00:20:18.308290 | orchestrator |
2026-04-01 00:20:18.308327 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-01 00:20:21.076074 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-01 00:20:21.076193 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-01 00:20:21.076218 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-01 00:20:21.076230 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-01 00:20:21.076241 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-01 00:20:21.076251 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-01 00:20:21.076261 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-01 00:20:21.076271 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-01 00:20:21.076281 | orchestrator |
2026-04-01 00:20:21.076292 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-01 00:20:21.718660 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:21.718759 | orchestrator |
2026-04-01 00:20:21.718776 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-01 00:20:22.350482 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:22.350605 | orchestrator |
2026-04-01 00:20:22.350633 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-01 00:20:22.429986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-01 00:20:22.430128 | orchestrator |
2026-04-01 00:20:22.430139 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-01 00:20:23.627666 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-01 00:20:23.627778 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-01 00:20:23.627793 | orchestrator |
2026-04-01 00:20:23.627805 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-01 00:20:24.230536 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:24.230646 | orchestrator |
2026-04-01 00:20:24.230655 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-01 00:20:24.290416 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:24.290520 | orchestrator |
2026-04-01 00:20:24.290537 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-01 00:20:24.371555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-01 00:20:24.371677 | orchestrator |
2026-04-01 00:20:24.371686 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-01 00:20:24.994651 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:24.994767 | orchestrator |
2026-04-01 00:20:24.994794 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-01 00:20:25.051011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-01 00:20:25.051113 | orchestrator |
2026-04-01 00:20:25.051129 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-01 00:20:26.377209 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-01 00:20:26.377294 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-01 00:20:26.377305 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:26.377314 | orchestrator |
2026-04-01 00:20:26.377322 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-01 00:20:27.006860 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:27.006961 | orchestrator |
2026-04-01 00:20:27.006977 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-01 00:20:27.061339 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:27.061452 | orchestrator |
2026-04-01 00:20:27.061473 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-01 00:20:27.163404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-01 00:20:27.163502 | orchestrator |
2026-04-01 00:20:27.163518 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-01 00:20:27.674195 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:27.674274 | orchestrator |
2026-04-01 00:20:27.674284 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-01 00:20:28.083642 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:28.083737 | orchestrator |
2026-04-01 00:20:28.083753 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-01 00:20:29.297040 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-01 00:20:29.297146 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-01 00:20:29.297162 | orchestrator |
2026-04-01 00:20:29.297175 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-01 00:20:29.917178 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:29.917299 | orchestrator |
2026-04-01 00:20:29.917324 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-01 00:20:30.286868 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:30.287025 | orchestrator |
2026-04-01 00:20:30.287045 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-01 00:20:30.646746 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:30.646845 | orchestrator |
2026-04-01 00:20:30.646861 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-01 00:20:30.696224 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:30.696329 | orchestrator |
2026-04-01 00:20:30.696346 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-01 00:20:30.763909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-01 00:20:30.764011 | orchestrator |
2026-04-01 00:20:30.764028 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-01 00:20:30.806006 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:30.806129 | orchestrator |
2026-04-01 00:20:30.806140 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-01 00:20:32.736690 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-01 00:20:32.736793 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-01 00:20:32.736810 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-01 00:20:32.736821 | orchestrator |
2026-04-01 00:20:32.736834 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-01 00:20:33.423233 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:33.423329 | orchestrator |
2026-04-01 00:20:33.423345 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-01 00:20:34.097819 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:34.097944 | orchestrator |
2026-04-01 00:20:34.097962 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-01 00:20:34.795618 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:34.795702 | orchestrator |
2026-04-01 00:20:34.795722 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-01 00:20:34.876505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-01 00:20:34.876643 | orchestrator |
2026-04-01 00:20:34.876661 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-01 00:20:34.921551 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:34.921714 | orchestrator |
2026-04-01 00:20:34.921731 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-01 00:20:35.608300 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-01 00:20:35.608401 | orchestrator |
2026-04-01 00:20:35.608417 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-01 00:20:35.685632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-01 00:20:35.685741 | orchestrator |
2026-04-01 00:20:35.685765 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-01 00:20:36.368624 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:36.368728 | orchestrator |
2026-04-01 00:20:36.368746 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-01 00:20:36.971870 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:36.971969 | orchestrator |
2026-04-01 00:20:36.971984 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-01 00:20:37.018067 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:20:37.018151 | orchestrator |
2026-04-01 00:20:37.018169 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-01 00:20:37.063653 | orchestrator | ok: [testbed-manager]
2026-04-01 00:20:37.063761 | orchestrator |
2026-04-01 00:20:37.063782 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-01 00:20:37.873160 | orchestrator | changed: [testbed-manager]
2026-04-01 00:20:37.873230 | orchestrator |
2026-04-01 00:20:37.873238 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-01 00:21:47.317626 | orchestrator | changed: [testbed-manager]
2026-04-01 00:21:47.317741 | orchestrator |
2026-04-01 00:21:47.317759 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-01 00:21:48.256309 | orchestrator | ok: [testbed-manager]
2026-04-01 00:21:48.256397 | orchestrator |
2026-04-01 00:21:48.256410 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-01 00:21:48.317496 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:21:48.317662 | orchestrator |
2026-04-01 00:21:48.317679 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-01 00:21:50.895031 | orchestrator | changed: [testbed-manager]
2026-04-01 00:21:50.895123 | orchestrator |
2026-04-01 00:21:50.895138 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-01 00:21:50.956844 | orchestrator | ok: [testbed-manager]
2026-04-01 00:21:50.956939 | orchestrator |
2026-04-01 00:21:50.956955 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-01 00:21:50.956967 | orchestrator |
2026-04-01 00:21:50.956979 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-01 00:21:51.109073 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:21:51.109169 | orchestrator |
2026-04-01 00:21:51.109184 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-01 00:22:51.167446 | orchestrator | Pausing for 60 seconds
2026-04-01 00:22:51.167601 | orchestrator | changed: [testbed-manager]
2026-04-01 00:22:51.167618 | orchestrator |
2026-04-01 00:22:51.167631 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-01 00:22:54.147146 | orchestrator | changed: [testbed-manager]
2026-04-01 00:22:54.147254 | orchestrator |
2026-04-01 00:22:54.147271 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-01 00:23:56.044603 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-01 00:23:56.044722 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-01 00:23:56.044738 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
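The handler above probes the manager service repeatedly (50 retries with a delay) and succeeds as soon as the service reports healthy. The same retry/until pattern can be sketched as a small shell helper; `wait_for` and the `docker inspect` probe in the comment are illustrative, not the role's actual implementation.

```shell
#!/bin/sh
# Generic retry loop mirroring the handler's retries/delay/until pattern.
# wait_for RETRIES DELAY CMD... : run CMD until it succeeds or retries run out.
wait_for() {
  retries=$1
  delay=$2
  shift 2
  until "$@"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      return 1   # give up after the last retry
    fi
    sleep "$delay"
  done
}

# Illustrative probe (container name is an assumption, not from the log):
# wait_for 50 5 sh -c \
#   '[ "$(docker inspect -f "{{.State.Health.Status}}" manager)" = "healthy" ]'
```

Three transient failures followed by `changed`, as in the log above, is exactly this loop succeeding on a later attempt.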
2026-04-01 00:23:56.044750 | orchestrator | changed: [testbed-manager]
2026-04-01 00:23:56.044763 | orchestrator |
2026-04-01 00:23:56.044775 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-01 00:24:01.647223 | orchestrator | changed: [testbed-manager]
2026-04-01 00:24:01.647354 | orchestrator |
2026-04-01 00:24:01.647374 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-01 00:24:01.745105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-01 00:24:01.745203 | orchestrator |
2026-04-01 00:24:01.745218 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-01 00:24:01.745230 | orchestrator |
2026-04-01 00:24:01.745242 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-01 00:24:01.796072 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:24:01.796168 | orchestrator |
2026-04-01 00:24:01.796187 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-01 00:24:01.867050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-01 00:24:01.867154 | orchestrator |
2026-04-01 00:24:01.867170 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-01 00:24:02.663673 | orchestrator | changed: [testbed-manager]
2026-04-01 00:24:02.663771 | orchestrator |
2026-04-01 00:24:02.663788 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-01 00:24:05.934182 | orchestrator | ok: [testbed-manager]
2026-04-01 00:24:05.934275 | orchestrator |
2026-04-01 00:24:05.934288 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-01 00:24:06.008188 | orchestrator | ok: [testbed-manager] => {
2026-04-01 00:24:06.008301 | orchestrator |     "version_check_result.stdout_lines": [
2026-04-01 00:24:06.008317 | orchestrator |         "=== OSISM Container Version Check ===",
2026-04-01 00:24:06.008329 | orchestrator |         "Checking running containers against expected versions...",
2026-04-01 00:24:06.008340 | orchestrator |         "",
2026-04-01 00:24:06.008347 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-01 00:24:06.008354 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-01 00:24:06.008362 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008369 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-01 00:24:06.008375 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008509 | orchestrator |         "",
2026-04-01 00:24:06.008563 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-01 00:24:06.008576 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-01 00:24:06.008587 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008593 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-01 00:24:06.008600 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008606 | orchestrator |         "",
2026-04-01 00:24:06.008612 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-01 00:24:06.008619 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-01 00:24:06.008625 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008631 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-01 00:24:06.008637 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008644 | orchestrator |         "",
2026-04-01 00:24:06.008650 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-01 00:24:06.008657 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-01 00:24:06.008663 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008669 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-01 00:24:06.008675 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008682 | orchestrator |         "",
2026-04-01 00:24:06.008688 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-01 00:24:06.008694 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-01 00:24:06.008700 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008706 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-01 00:24:06.008712 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008719 | orchestrator |         "",
2026-04-01 00:24:06.008725 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-04-01 00:24:06.008731 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-01 00:24:06.008737 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008744 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-01 00:24:06.008750 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008756 | orchestrator |         "",
2026-04-01 00:24:06.008763 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-04-01 00:24:06.008769 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-01 00:24:06.008775 | orchestrator |         "  Enabled: true",
2026-04-01 00:24:06.008782 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-01 00:24:06.008788 | orchestrator |         "  Status: ✅ MATCH",
2026-04-01 00:24:06.008795 | orchestrator |         "",
2026-04-01 00:24:06.008801 | orchestrator |         "Checking service:
mariadb (MariaDB for ARA)", 2026-04-01 00:24:06.008807 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-01 00:24:06.008813 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.008819 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-01 00:24:06.008826 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.008832 | orchestrator | "", 2026-04-01 00:24:06.008838 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-01 00:24:06.008844 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-01 00:24:06.008851 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.008857 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-01 00:24:06.008863 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.008869 | orchestrator | "", 2026-04-01 00:24:06.008876 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-01 00:24:06.008882 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-01 00:24:06.008888 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.008895 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-01 00:24:06.008906 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.008927 | orchestrator | "", 2026-04-01 00:24:06.008938 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-01 00:24:06.008949 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.008960 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.008971 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.008987 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.008998 | orchestrator | "", 2026-04-01 00:24:06.009009 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-01 00:24:06.009020 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009031 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.009042 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009053 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.009065 | orchestrator | "", 2026-04-01 00:24:06.009076 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-01 00:24:06.009093 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009112 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.009131 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009149 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.009168 | orchestrator | "", 2026-04-01 00:24:06.009186 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-01 00:24:06.009205 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009223 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.009242 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009282 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.009294 | orchestrator | "", 2026-04-01 00:24:06.009306 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-01 00:24:06.009316 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009327 | orchestrator | " Enabled: true", 2026-04-01 00:24:06.009338 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-01 00:24:06.009349 | orchestrator | " Status: ✅ MATCH", 2026-04-01 00:24:06.009360 | orchestrator | "", 2026-04-01 00:24:06.009371 | orchestrator | "=== Summary ===", 2026-04-01 00:24:06.009382 | orchestrator | "Errors (version mismatches): 0", 2026-04-01 00:24:06.009393 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-01 00:24:06.009404 | orchestrator | "", 2026-04-01 00:24:06.009447 | orchestrator | "✅ All running containers match expected versions!" 2026-04-01 00:24:06.009459 | orchestrator | ] 2026-04-01 00:24:06.009470 | orchestrator | } 2026-04-01 00:24:06.009482 | orchestrator | 2026-04-01 00:24:06.009494 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-01 00:24:06.059348 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:24:06.059479 | orchestrator | 2026-04-01 00:24:06.059504 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:24:06.059525 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-01 00:24:06.059542 | orchestrator | 2026-04-01 00:24:06.177187 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-01 00:24:06.177278 | orchestrator | + deactivate 2026-04-01 00:24:06.177293 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-01 00:24:06.177306 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-01 00:24:06.177317 | orchestrator | + export PATH 2026-04-01 00:24:06.177328 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-01 00:24:06.177339 | orchestrator | + '[' -n '' ']' 2026-04-01 00:24:06.177350 | orchestrator | + hash -r 2026-04-01 00:24:06.177361 | orchestrator | + '[' -n '' ']' 2026-04-01 00:24:06.177372 | orchestrator | + unset VIRTUAL_ENV 2026-04-01 00:24:06.177382 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-01 00:24:06.177393 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-01 00:24:06.177404 | orchestrator | + unset -f deactivate 2026-04-01 00:24:06.177470 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-01 00:24:06.185080 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-01 00:24:06.185110 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-01 00:24:06.185121 | orchestrator | + local max_attempts=60 2026-04-01 00:24:06.185132 | orchestrator | + local name=ceph-ansible 2026-04-01 00:24:06.185143 | orchestrator | + local attempt_num=1 2026-04-01 00:24:06.186219 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:24:06.217378 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:24:06.217544 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-01 00:24:06.217567 | orchestrator | + local max_attempts=60 2026-04-01 00:24:06.217586 | orchestrator | + local name=kolla-ansible 2026-04-01 00:24:06.217605 | orchestrator | + local attempt_num=1 2026-04-01 00:24:06.218267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-01 00:24:06.252588 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:24:06.252679 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-01 00:24:06.252694 | orchestrator | + local max_attempts=60 2026-04-01 00:24:06.252707 | orchestrator | + local name=osism-ansible 2026-04-01 00:24:06.252718 | orchestrator | + local attempt_num=1 2026-04-01 00:24:06.253554 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-01 00:24:06.298288 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:24:06.298381 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-01 00:24:06.298396 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-01 00:24:07.039138 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-01 00:24:07.210343 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-01 00:24:07.210481 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-01 00:24:07.210497 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-01 00:24:07.210509 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-01 00:24:07.210534 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-04-01 00:24:07.210544 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-01 00:24:07.210554 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-01 00:24:07.210564 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-01 00:24:07.210574 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-01 00:24:07.210584 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-01 00:24:07.210594 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes 
(healthy) 2026-04-01 00:24:07.210604 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-01 00:24:07.210637 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-01 00:24:07.210647 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-01 00:24:07.210657 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-01 00:24:07.210668 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-01 00:24:07.215293 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-01 00:24:07.265348 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-01 00:24:07.265470 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-01 00:24:07.270255 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-01 00:24:19.763599 | orchestrator | 2026-04-01 00:24:19 | INFO  | Prepare task for execution of resolvconf. 2026-04-01 00:24:19.946563 | orchestrator | 2026-04-01 00:24:19 | INFO  | Task 2a91518a-7686-4bb1-bb80-294613d8bbbc (resolvconf) was prepared for execution. 2026-04-01 00:24:19.946708 | orchestrator | 2026-04-01 00:24:19 | INFO  | It takes a moment until task 2a91518a-7686-4bb1-bb80-294613d8bbbc (resolvconf) has been started and output is visible here. 
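The `wait_for_container_healthy` calls traced above poll Docker's health status for each manager container. A minimal sketch of such a helper, reconstructed from the trace (the retry delay and error message are assumptions; the real helper lives in the testbed deploy scripts):

```shell
#!/usr/bin/env bash
# Poll a container's Docker health status until it reports "healthy"
# or the attempt budget is exhausted. Sketch only; mirrors the
# wait_for_container_healthy calls visible in the job trace.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # assumed delay; not shown in the trace
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`, it returns immediately when the container is already healthy.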
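The `++ semver 10.0.0 7.0.0` step above uses a version-comparison helper whose implementation is not shown in the log; judging by the `[[ 1 -ge 0 ]]` check that follows, it prints 1/0/-1. A rough stand-in using GNU `sort -V` (an assumption, not the job's actual helper):

```shell
# Rough semver comparison: prints 1 if $1 > $2, 0 if equal, -1 if
# $1 < $2. Relies on GNU sort -V for version ordering; sketch only.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1
    else
        echo -1
    fi
}
```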
2026-04-01 00:24:33.136215 | orchestrator | 2026-04-01 00:24:33.136358 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-01 00:24:33.136386 | orchestrator | 2026-04-01 00:24:33.136527 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:24:33.136547 | orchestrator | Wednesday 01 April 2026 00:24:23 +0000 (0:00:00.223) 0:00:00.223 ******* 2026-04-01 00:24:33.136566 | orchestrator | ok: [testbed-manager] 2026-04-01 00:24:33.136585 | orchestrator | 2026-04-01 00:24:33.136604 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-01 00:24:33.136623 | orchestrator | Wednesday 01 April 2026 00:24:26 +0000 (0:00:03.841) 0:00:04.064 ******* 2026-04-01 00:24:33.136641 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:24:33.136661 | orchestrator | 2026-04-01 00:24:33.136679 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-01 00:24:33.136697 | orchestrator | Wednesday 01 April 2026 00:24:26 +0000 (0:00:00.062) 0:00:04.127 ******* 2026-04-01 00:24:33.136717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-01 00:24:33.136738 | orchestrator | 2026-04-01 00:24:33.136758 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-01 00:24:33.136777 | orchestrator | Wednesday 01 April 2026 00:24:27 +0000 (0:00:00.079) 0:00:04.207 ******* 2026-04-01 00:24:33.136796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:24:33.136815 | orchestrator | 2026-04-01 00:24:33.136834 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-01 00:24:33.136853 | orchestrator | Wednesday 01 April 2026 00:24:27 +0000 (0:00:00.080) 0:00:04.287 ******* 2026-04-01 00:24:33.136873 | orchestrator | ok: [testbed-manager] 2026-04-01 00:24:33.136892 | orchestrator | 2026-04-01 00:24:33.136911 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-01 00:24:33.136930 | orchestrator | Wednesday 01 April 2026 00:24:28 +0000 (0:00:01.170) 0:00:05.458 ******* 2026-04-01 00:24:33.136950 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:24:33.137007 | orchestrator | 2026-04-01 00:24:33.137027 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-01 00:24:33.137046 | orchestrator | Wednesday 01 April 2026 00:24:28 +0000 (0:00:00.062) 0:00:05.520 ******* 2026-04-01 00:24:33.137064 | orchestrator | ok: [testbed-manager] 2026-04-01 00:24:33.137085 | orchestrator | 2026-04-01 00:24:33.137103 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-01 00:24:33.137121 | orchestrator | Wednesday 01 April 2026 00:24:28 +0000 (0:00:00.542) 0:00:06.063 ******* 2026-04-01 00:24:33.137139 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:24:33.137158 | orchestrator | 2026-04-01 00:24:33.137177 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-01 00:24:33.137196 | orchestrator | Wednesday 01 April 2026 00:24:28 +0000 (0:00:00.078) 0:00:06.141 ******* 2026-04-01 00:24:33.137213 | orchestrator | changed: [testbed-manager] 2026-04-01 00:24:33.137232 | orchestrator | 2026-04-01 00:24:33.137249 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-01 00:24:33.137268 | orchestrator | Wednesday 01 April 2026 00:24:29 +0000 (0:00:00.623) 0:00:06.764 ******* 2026-04-01 00:24:33.137286 | orchestrator | changed: 
[testbed-manager] 2026-04-01 00:24:33.137304 | orchestrator | 2026-04-01 00:24:33.137322 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-01 00:24:33.137340 | orchestrator | Wednesday 01 April 2026 00:24:30 +0000 (0:00:01.102) 0:00:07.866 ******* 2026-04-01 00:24:33.137358 | orchestrator | ok: [testbed-manager] 2026-04-01 00:24:33.137375 | orchestrator | 2026-04-01 00:24:33.137418 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-01 00:24:33.137436 | orchestrator | Wednesday 01 April 2026 00:24:31 +0000 (0:00:01.000) 0:00:08.867 ******* 2026-04-01 00:24:33.137455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-01 00:24:33.137473 | orchestrator | 2026-04-01 00:24:33.137491 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-01 00:24:33.137509 | orchestrator | Wednesday 01 April 2026 00:24:31 +0000 (0:00:00.071) 0:00:08.939 ******* 2026-04-01 00:24:33.137526 | orchestrator | changed: [testbed-manager] 2026-04-01 00:24:33.137543 | orchestrator | 2026-04-01 00:24:33.137561 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:24:33.137581 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-01 00:24:33.137599 | orchestrator | 2026-04-01 00:24:33.137617 | orchestrator | 2026-04-01 00:24:33.137634 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:24:33.137652 | orchestrator | Wednesday 01 April 2026 00:24:32 +0000 (0:00:01.177) 0:00:10.116 ******* 2026-04-01 00:24:33.137670 | orchestrator | =============================================================================== 2026-04-01 00:24:33.137688 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.84s 2026-04-01 00:24:33.137706 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2026-04-01 00:24:33.137724 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s 2026-04-01 00:24:33.137742 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2026-04-01 00:24:33.137785 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-04-01 00:24:33.137804 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s 2026-04-01 00:24:33.137848 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-04-01 00:24:33.137867 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-04-01 00:24:33.137885 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-01 00:24:33.137917 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-01 00:24:33.137935 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-01 00:24:33.137953 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-01 00:24:33.137971 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-01 00:24:33.302850 | orchestrator | + osism apply sshconfig 2026-04-01 00:24:44.586687 | orchestrator | 2026-04-01 00:24:44 | INFO  | Prepare task for execution of sshconfig. 2026-04-01 00:24:44.658373 | orchestrator | 2026-04-01 00:24:44 | INFO  | Task 005ba56e-ad7a-49b8-945a-6ae4d8195e89 (sshconfig) was prepared for execution. 
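The `osism apply sshconfig` run that follows writes one config fragment per host under `.ssh/config.d` and then assembles them into a single SSH config, matching the task names in its play. A sketch of that pattern (paths, the `dragon` user, and the per-host options are assumptions; the role's actual templates are not shown in the log):

```shell
# Sketch of the sshconfig pattern: one fragment per host in
# ~/.ssh/config.d, concatenated into ~/.ssh/config. Options per
# host are illustrative assumptions, not the role's real template.
write_ssh_config() {
    local home="$1"; shift
    mkdir -p "$home/.ssh/config.d"
    for host in "$@"; do
        printf 'Host %s\n  User dragon\n\n' "$host" \
            > "$home/.ssh/config.d/$host"
    done
    # Assemble all fragments into the final config file.
    cat "$home/.ssh/config.d"/* > "$home/.ssh/config"
}
```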
2026-04-01 00:24:44.658486 | orchestrator | 2026-04-01 00:24:44 | INFO  | It takes a moment until task 005ba56e-ad7a-49b8-945a-6ae4d8195e89 (sshconfig) has been started and output is visible here. 2026-04-01 00:24:54.968362 | orchestrator | 2026-04-01 00:24:54.968547 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-01 00:24:54.968568 | orchestrator | 2026-04-01 00:24:54.968581 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-01 00:24:54.968593 | orchestrator | Wednesday 01 April 2026 00:24:47 +0000 (0:00:00.142) 0:00:00.142 ******* 2026-04-01 00:24:54.968604 | orchestrator | ok: [testbed-manager] 2026-04-01 00:24:54.968616 | orchestrator | 2026-04-01 00:24:54.968627 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-01 00:24:54.968639 | orchestrator | Wednesday 01 April 2026 00:24:48 +0000 (0:00:00.914) 0:00:01.057 ******* 2026-04-01 00:24:54.968650 | orchestrator | changed: [testbed-manager] 2026-04-01 00:24:54.968663 | orchestrator | 2026-04-01 00:24:54.968673 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-01 00:24:54.968685 | orchestrator | Wednesday 01 April 2026 00:24:48 +0000 (0:00:00.468) 0:00:01.525 ******* 2026-04-01 00:24:54.968696 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-01 00:24:54.968707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-01 00:24:54.968718 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-01 00:24:54.968729 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-01 00:24:54.968739 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-01 00:24:54.968750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-01 00:24:54.968761 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-01 00:24:54.968772 | orchestrator | 2026-04-01 00:24:54.968782 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-01 00:24:54.968793 | orchestrator | Wednesday 01 April 2026 00:24:54 +0000 (0:00:05.398) 0:00:06.924 ******* 2026-04-01 00:24:54.968804 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:24:54.968815 | orchestrator | 2026-04-01 00:24:54.968826 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-01 00:24:54.968836 | orchestrator | Wednesday 01 April 2026 00:24:54 +0000 (0:00:00.110) 0:00:07.034 ******* 2026-04-01 00:24:54.968847 | orchestrator | changed: [testbed-manager] 2026-04-01 00:24:54.968858 | orchestrator | 2026-04-01 00:24:54.968868 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:24:54.968883 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:24:54.968896 | orchestrator | 2026-04-01 00:24:54.968910 | orchestrator | 2026-04-01 00:24:54.968923 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:24:54.968935 | orchestrator | Wednesday 01 April 2026 00:24:54 +0000 (0:00:00.498) 0:00:07.533 ******* 2026-04-01 00:24:54.968948 | orchestrator | =============================================================================== 2026-04-01 00:24:54.968960 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.40s 2026-04-01 00:24:54.969002 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.91s 2026-04-01 00:24:54.969015 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-04-01 00:24:54.969028 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.47s 2026-04-01 00:24:54.969040 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-01 00:24:55.145240 | orchestrator | + osism apply known-hosts 2026-04-01 00:25:06.495820 | orchestrator | 2026-04-01 00:25:06 | INFO  | Prepare task for execution of known-hosts. 2026-04-01 00:25:06.567553 | orchestrator | 2026-04-01 00:25:06 | INFO  | Task 8e20c8ea-c6dd-4088-8d36-fd2c57cb5327 (known-hosts) was prepared for execution. 2026-04-01 00:25:06.567644 | orchestrator | 2026-04-01 00:25:06 | INFO  | It takes a moment until task 8e20c8ea-c6dd-4088-8d36-fd2c57cb5327 (known-hosts) has been started and output is visible here. 2026-04-01 00:25:21.989204 | orchestrator | 2026-04-01 00:25:21.989306 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-01 00:25:21.989321 | orchestrator | 2026-04-01 00:25:21.989332 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-01 00:25:21.989344 | orchestrator | Wednesday 01 April 2026 00:25:09 +0000 (0:00:00.190) 0:00:00.190 ******* 2026-04-01 00:25:21.989358 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-01 00:25:21.989408 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-01 00:25:21.989425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-01 00:25:21.989450 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-01 00:25:21.989468 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-01 00:25:21.989485 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-01 00:25:21.989501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-01 00:25:21.989517 | orchestrator | 2026-04-01 00:25:21.989528 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-01 
00:25:21.989539 | orchestrator | Wednesday 01 April 2026 00:25:16 +0000 (0:00:06.354) 0:00:06.545 ******* 2026-04-01 00:25:21.989551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-01 00:25:21.989563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-01 00:25:21.989573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-01 00:25:21.989583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-01 00:25:21.989593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-01 00:25:21.989603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-01 00:25:21.989613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-01 00:25:21.989622 | orchestrator | 2026-04-01 00:25:21.989632 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-01 00:25:21.989642 | orchestrator | Wednesday 01 April 2026 00:25:16 +0000 (0:00:00.186) 0:00:06.732 ******* 2026-04-01 00:25:21.989671 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3MI2dk1N2/QZjbwGVWz1lZuxzRhnVbfDR66vj7Hl9bOIKRkG12qyF32g0OOW3l0eUaUZgD2oQZjSEONIdmdog=)
2026-04-01 00:25:21.989687 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdoPez4utFUtEh+Jw8SB9DNH7gRLuxE1PxokdSQMIQFwe2RhOyO+CJcqqmC2ig6c8Ko6HxO74o9/4oyGrqZW7xsoJ2FqzqgwQVjugBsGF8X2leJBsQn+NKGaAtgjA9/KzKIASicxNix074afzi2oU3H7lp6T5Du6O3JppClJyod1Vej9VbAhRs6MeCWSdsx/72gM+3W/dGVwgQoFA4iaF733Tc27z6M1p8v+HdTnR8V223g7yEFcdjuWFraWNHEL9OgeLUEJOpTmvozlZxavZptgc0kzAf0O1QKMYz91aQtP7zAbL1erPkEkOMAbZ1E/MRXOTl6rOyOJt6GQk82CSbOJgAJGjWaTDqz4zfYtB7ys31gS4TwAJDdrQQp5KGeepuDai36Sdo3Wf3ELgeyePX2LE/13KyzcEU8DGa8yJOgiy+BnUtvvsBNif8Vzuf08+8tDC06XYXxyIR1EDU9QPx3oGkn8VJ5FeTrmZPzt+KSHD58wxNArk/xD+FH9ScmXE=)
2026-04-01 00:25:21.989700 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnuGPIwm77XKeeQOzF6NIkvXaxfJLqeuZURnFkGGtYr)
2026-04-01 00:25:21.989712 | orchestrator |
2026-04-01 00:25:21.989722 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:21.989731 | orchestrator | Wednesday 01 April 2026 00:25:17 +0000 (0:00:01.216) 0:00:07.948 *******
2026-04-01 00:25:21.989765 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqXxwsvxuoxaDikImHpEzu9JSdIWIuvhqGJTyCqx9aJaU4To0Ny7ecyQx0mfUANez2fWUtwrBFVSxzJLSSflYhiAqrshGsvTBaZ4DgNcrm+nx9zAhDbGN+FzaQ6YEXBfoVCxC5XRnpmrq+libCPDKd9wNHeoeUsKKt3hQFalgoe5lGCtvUlilPfJnJa/FtuEFA9K7NblL4xx0HurWg78FGiLiCqTuBeBf8vhhSdW9yHn6dlViXimV1aTfkJnDm0cajm/b75ttmwh1iOvBAjGYegPMBV4Bv84xco8m4h95hVZfOLTcb3YMYNhSB7DVdZ0M8Tnu0FEAwto0yo6mq/ZdcmGRTyg50YEYu4wlIQoWm+fp0XNVr5GlIkQc3qxfK6MlGAHNUh2rMfuvBhNaV/sEp49QJENkbTsbe6RUwaMSu1jUP7hkSjhkj7Kz/GK5KhZfMqRvdHDGfw8GWsm6zyJQZ0shhQEpeJaQ6wTScmpTPERd4206CZJEMu2C32PVI1KM=)
2026-04-01 00:25:21.989778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHKY0Cst7pJrcK6z/qogrB6FtBDf5zdxeZKTj83ydjEtHe+y4LuhoN/WwPf4d17Xl4TE7M62ZkXOUYUh5WhCCw=)
2026-04-01 00:25:21.989790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqTQC6INFd+8k7AzS1gB5FlhBtoIWMMPVYyLH3MzFQ1)
2026-04-01 00:25:21.989801 | orchestrator |
2026-04-01 00:25:21.989812 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:21.989824 | orchestrator | Wednesday 01 April 2026 00:25:18 +0000 (0:00:01.049) 0:00:08.998 *******
2026-04-01 00:25:21.989836 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1F5EqStEXgQjSf0vZiTTR0vdlMQtzBim7FtYvgC3Jdy27yqFbsFnCxWe23GWQzotRNVNzUbz3+/PaivFmS0xU=)
2026-04-01 00:25:21.989904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPXZWnuqd838MP0zyMwVA7CZ9u9Zx1tAl2rYycSB7Ceh81ysb1Ch6Xb9BluLhSHCKevQTGd+bVziShJR7II0jGyrZiALosFVPIlRVGWZTxTHgPcakHF6Wipd/8HhaKVx/xX8Gtstj6aJjuXQR21zGjIMApoDCU4nZZSGKOdb3p5+HCfeHM5NAjTZHpKrQxefhKAcVHhjo4r61VZOZGPSbhCj8pRGQYyhMiKsY7rcCgQgN3/W3pMZZos9RIJyqcQpntr6uh33vFY3292/lYA97vVF9pxjpRpuX0ex4plVO+gKeU9h3DwlFqaAJtqgGrFqEib8A+RJZVXpnBt877IPUn1IAhBBXzMl1evVIU9BhMdVlmwxNnL5jalUZLcXi6bm7GFNZ20lu3HWNoUD36Qpd13GBcq9vfqUKuMAZ6kkV87yHSSx5tf3Ie8NCiVno1LrPyeBsqUlpdVYxOlpElWTeu4Rf+zrw+okplbC6uNnGKD1Yw9xkyifJu1oVNIbW8tkM=)
2026-04-01 00:25:21.989917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMC56v0cAhoFokvLij5aBbI67SxISaFV/O7TgpoiRkyf)
2026-04-01 00:25:21.989929 | orchestrator |
2026-04-01 00:25:21.989941 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:21.989956 | orchestrator | Wednesday 01 April 2026 00:25:19 +0000 (0:00:01.030) 0:00:10.028 *******
2026-04-01 00:25:21.989967 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvJ5Tn7oECWa0NGirWitLTuTMN5BXkjUCXTI4r+DQ08NEPAJmDN9IYJ4wtSPQHw9mdoQUOL1Y0H6OLPySU8xCz66lUOIwrBZabtKnqbmD5W8tOgc2rMdX7hm3vvS/PwYEmzPMX7ox1IPKByN3nFOofKLVmi9pNymD9TXrz7Q3znejkssJjqUMUwQq61NxOkOV9uhO98Qhjy7bKQS0r60DIDpiu0m0okvnXB5LZseeUhn6TphHyTxDZKXzcwLe/TCCsSJvM/CpSko6hWRJq43blFUJuuRB3qCJa5ILqzt2Uflc65xcqgchLHOAoyBOHfX75/VjUabUx9Y7W4B1dsjXdZ6e7O+B+UiPZFHlDN5Kce+mi02FfGGKCR8xCuN57SAwSHbZxSA75mEW7bItA2YcP9ECasn87xwThL2pnfTzt6ZFgoD3ICDEGxJBNIHFupB9UwCd9YCXcbMW2vZgbf1QK8t0Z8kdHGNC8lNsd3OCye5+5s4wsCMkCsSIGmHYc4DM=)
2026-04-01 00:25:21.989986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLl+e0R5DSRjWeWjwBQf7amY3Y1R3UrH+eO9DTliKnl97u7UhlOPa2e75UxepwNJ2B9qnmk6TWMG7Fnh6CwRBag=)
2026-04-01 00:25:21.989998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRsnxvmZLkr2j7kGeQu77lBrShZHEdIFr4L/QBngL9U)
2026-04-01 00:25:21.990009 | orchestrator |
2026-04-01 00:25:21.990078 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:21.990091 | orchestrator | Wednesday 01 April 2026 00:25:20 +0000 (0:00:01.034) 0:00:11.062 *******
2026-04-01 00:25:21.990102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv+8jp7ScGGRwx1FBZbSQhY+XEKm2TCF5JtMYgyaxqAT552mW4Xu37CoEuwBkvV6zTLXiUkokDLIi3FcVKDj/8AadSpKVNCRjNuTu76DOGkTQo6lhSii4XYzyNyeqDglgBcXRHRS5PSO1XxuKoR5i8dvwbFBvB2baoHWHrODrqz5qO5PmNosop8Rid/lgeY7fNUMB1pubU9ueW5k3Ph7Kk78/O9pT/r9bfXC5swAkg1p2K2G2lBZShoQeWKOhL+3byicKv0uST+Vk59by+5fJotsEY2tNg/QvKmNQl3ftwpKRR/022S3bu+4g5ZOPtz2bdGxx7QGwFtOCyYooloIpfH6G5d1THc5MLHGjDsoltHxnTtTx9uwIpPkCCdrf/+K+0nngIMbCPgqfloqzthd8st2qw9RoevZyXJnAhmBUwk2fpkGBOglJesuCGp908BQY/HEJi64GxEm2FwzDln/ejfeaaV7EM8fbgqxP64RYdsbizAmY4XbiySV25xHo5PqU=)
2026-04-01 00:25:21.990114 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBaSA6GRyNj/kEgywkLZATo9vZ93d0b7kuQlONZgdGZ1)
2026-04-01 00:25:21.990123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPcoABsAbfwngbDjoPrGL4J78eFdR8jOx1TcAyAcSzWqedQ9f12VyvxC689S41fA0HqP4fY5n5HRsSH+2ouwXQs=)
2026-04-01 00:25:21.990133 | orchestrator |
2026-04-01 00:25:21.990143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:21.990152 | orchestrator | Wednesday 01 April 2026 00:25:21 +0000 (0:00:01.053) 0:00:12.116 *******
2026-04-01 00:25:21.990170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHDOO8W9EeqHessKZhwYav9yOL5scT8hP23eU09lgTlbdnhrAzNFgT/39PcRtQkQ4BD6l0CSchh0ctowCKQv4o=)
2026-04-01 00:25:32.914839 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf1Xzgo1oBauptCv50MFCc5FMKkRosZVM+Fz1SBNLPlYpfxOpergMBEadPamad26a+PspZ0yx7GhHiD227NXqAJDGuvG/t1teZDYArZ/MOJm6h4/vTwATffjbFTzBJKo2abPQDheUSc4gLkcZzbvhGFyLWHXFKGzJYJVCA4pKQWhNm9Rmhd7Aj+WexgnB3mK2+FNTt9J32/+0PmwoEb+FnJ/cMiFLSEbw05M5K3GWv8hEBRX/42ePW9dHwJgpFHHjSED5/fyfs7FC6CMFIOq+dD+NF4UKb4RpNwBX4QCydqt8G638ZdHTY5laiShfUXyvXldXL4zgbLaB956KMiUAd1B4nKagMZj8j0tZEAoRpNmhwKqUJuu27OvLVtwgCDCuQDJrBWyW3EI4+hweDjg8vh/jQpL+Nlj3vKLy/MIXqorkQgWG1gFxHwpAh2x3tZ9cnBnT4aREMD/1qH6i39p5X2Gfp5M4y5wDQiApSHHywcqLV7h3KHDoVGpspMFJvhpk=)
2026-04-01 00:25:32.914991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICHpCO45hM63Aj5fXr4C7m/eVCYatLZfnRggTtsKT1Hb)
2026-04-01 00:25:32.915017 | orchestrator |
2026-04-01 00:25:32.915031 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:32.915044 | orchestrator | Wednesday 01 April 2026 00:25:22 +0000 (0:00:01.041) 0:00:13.157 *******
2026-04-01 00:25:32.915055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIONDfUWMVNXjdF020qx3zBlyiQpJMFeq6K+DNDeTUgzN)
2026-04-01 00:25:32.915068 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx2eLPvKDoVwaBZRsLljCwlE02on8gGtLR7OsIw40k/QLBlKsVh9304y1FDZbXkIX7Ylfvz1fbTcuNrr9g0Ll2Dt4JvUetUTKuPXoN2LiU31+FPXKoTlA3OeUpN7slzn70i07kUxe1LImJXcqO2Pyb7IpR6ajteIR/AmAyTDA40UAycv9vvGws0GbZlaLBuLcUBzFY3N1QVUGub8lDnP81EMgGA6oSeKeXHiZGNFLvKbC5gXd2vqmmOT7NCYLDre3Inzr8xbaljiCijVGEbDOxXgOE+O8SHqzIDnZOjNXFSZQs0f1WyDpPI9rSxsxYOGzL6tYTTXdgM0INDP6yNwnj7ot3TNemYdzy7VUbEJ2eh1WV3ygvtJwAs27CZ47G2oXIFG+la1WkhZvRib/pXyrd9n88peCT4QopnDRQxvd4c8SQKN/HHfXoLx34NPbBSmg0eMSHjFNJdqqiuaY1aVw97JoiII7O09GVmp9dZy9nOtax3LnT4YVRlHJAK6zxBgM=)
2026-04-01 00:25:32.915107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRsEp2iEa+ueNX0kzjM/Bhm5t0CIlimedm6oC3c19PT4rCyYpxPl7cwCCU+o79YUQsVyuK20lTs9g44WiYwBC8=)
2026-04-01 00:25:32.915120 | orchestrator |
2026-04-01 00:25:32.915132 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-01 00:25:32.915144 | orchestrator | Wednesday 01 April 2026 00:25:23 +0000 (0:00:01.076) 0:00:14.234 *******
2026-04-01 00:25:32.915155 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-01 00:25:32.915166 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-01 00:25:32.915177 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-01 00:25:32.915187 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-01 00:25:32.915198 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-01 00:25:32.915209 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-01 00:25:32.915220 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-01 00:25:32.915231 | orchestrator |
2026-04-01 00:25:32.915241 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-01 00:25:32.915254 | orchestrator | Wednesday 01 April 2026 00:25:28 +0000 (0:00:05.194) 0:00:19.429 *******
2026-04-01 00:25:32.915283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-01 00:25:32.915296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-01 00:25:32.915307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-01 00:25:32.915318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-01 00:25:32.915329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-01 00:25:32.915340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-01 00:25:32.915354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-01 00:25:32.915419 | orchestrator |
2026-04-01 00:25:32.915461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:32.915481 | orchestrator | Wednesday 01 April 2026 00:25:29 +0000 (0:00:00.166) 0:00:19.595 *******
2026-04-01 00:25:32.915500 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3MI2dk1N2/QZjbwGVWz1lZuxzRhnVbfDR66vj7Hl9bOIKRkG12qyF32g0OOW3l0eUaUZgD2oQZjSEONIdmdog=)
2026-04-01 00:25:32.915520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdoPez4utFUtEh+Jw8SB9DNH7gRLuxE1PxokdSQMIQFwe2RhOyO+CJcqqmC2ig6c8Ko6HxO74o9/4oyGrqZW7xsoJ2FqzqgwQVjugBsGF8X2leJBsQn+NKGaAtgjA9/KzKIASicxNix074afzi2oU3H7lp6T5Du6O3JppClJyod1Vej9VbAhRs6MeCWSdsx/72gM+3W/dGVwgQoFA4iaF733Tc27z6M1p8v+HdTnR8V223g7yEFcdjuWFraWNHEL9OgeLUEJOpTmvozlZxavZptgc0kzAf0O1QKMYz91aQtP7zAbL1erPkEkOMAbZ1E/MRXOTl6rOyOJt6GQk82CSbOJgAJGjWaTDqz4zfYtB7ys31gS4TwAJDdrQQp5KGeepuDai36Sdo3Wf3ELgeyePX2LE/13KyzcEU8DGa8yJOgiy+BnUtvvsBNif8Vzuf08+8tDC06XYXxyIR1EDU9QPx3oGkn8VJ5FeTrmZPzt+KSHD58wxNArk/xD+FH9ScmXE=)
2026-04-01 00:25:32.915556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEnuGPIwm77XKeeQOzF6NIkvXaxfJLqeuZURnFkGGtYr)
2026-04-01 00:25:32.915574 | orchestrator |
2026-04-01 00:25:32.915591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:32.915609 | orchestrator | Wednesday 01 April 2026 00:25:30 +0000 (0:00:01.043) 0:00:20.639 *******
2026-04-01 00:25:32.915628 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHKY0Cst7pJrcK6z/qogrB6FtBDf5zdxeZKTj83ydjEtHe+y4LuhoN/WwPf4d17Xl4TE7M62ZkXOUYUh5WhCCw=)
2026-04-01 00:25:32.915646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqTQC6INFd+8k7AzS1gB5FlhBtoIWMMPVYyLH3MzFQ1)
2026-04-01 00:25:32.915665 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqXxwsvxuoxaDikImHpEzu9JSdIWIuvhqGJTyCqx9aJaU4To0Ny7ecyQx0mfUANez2fWUtwrBFVSxzJLSSflYhiAqrshGsvTBaZ4DgNcrm+nx9zAhDbGN+FzaQ6YEXBfoVCxC5XRnpmrq+libCPDKd9wNHeoeUsKKt3hQFalgoe5lGCtvUlilPfJnJa/FtuEFA9K7NblL4xx0HurWg78FGiLiCqTuBeBf8vhhSdW9yHn6dlViXimV1aTfkJnDm0cajm/b75ttmwh1iOvBAjGYegPMBV4Bv84xco8m4h95hVZfOLTcb3YMYNhSB7DVdZ0M8Tnu0FEAwto0yo6mq/ZdcmGRTyg50YEYu4wlIQoWm+fp0XNVr5GlIkQc3qxfK6MlGAHNUh2rMfuvBhNaV/sEp49QJENkbTsbe6RUwaMSu1jUP7hkSjhkj7Kz/GK5KhZfMqRvdHDGfw8GWsm6zyJQZ0shhQEpeJaQ6wTScmpTPERd4206CZJEMu2C32PVI1KM=)
2026-04-01 00:25:32.915684 | orchestrator |
2026-04-01 00:25:32.915703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:32.915723 | orchestrator | Wednesday 01 April 2026 00:25:31 +0000 (0:00:01.045) 0:00:21.684 *******
2026-04-01 00:25:32.915742 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPXZWnuqd838MP0zyMwVA7CZ9u9Zx1tAl2rYycSB7Ceh81ysb1Ch6Xb9BluLhSHCKevQTGd+bVziShJR7II0jGyrZiALosFVPIlRVGWZTxTHgPcakHF6Wipd/8HhaKVx/xX8Gtstj6aJjuXQR21zGjIMApoDCU4nZZSGKOdb3p5+HCfeHM5NAjTZHpKrQxefhKAcVHhjo4r61VZOZGPSbhCj8pRGQYyhMiKsY7rcCgQgN3/W3pMZZos9RIJyqcQpntr6uh33vFY3292/lYA97vVF9pxjpRpuX0ex4plVO+gKeU9h3DwlFqaAJtqgGrFqEib8A+RJZVXpnBt877IPUn1IAhBBXzMl1evVIU9BhMdVlmwxNnL5jalUZLcXi6bm7GFNZ20lu3HWNoUD36Qpd13GBcq9vfqUKuMAZ6kkV87yHSSx5tf3Ie8NCiVno1LrPyeBsqUlpdVYxOlpElWTeu4Rf+zrw+okplbC6uNnGKD1Yw9xkyifJu1oVNIbW8tkM=)
2026-04-01 00:25:32.915763 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1F5EqStEXgQjSf0vZiTTR0vdlMQtzBim7FtYvgC3Jdy27yqFbsFnCxWe23GWQzotRNVNzUbz3+/PaivFmS0xU=)
2026-04-01 00:25:32.915781 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMC56v0cAhoFokvLij5aBbI67SxISaFV/O7TgpoiRkyf)
2026-04-01 00:25:32.915800 | orchestrator |
2026-04-01 00:25:32.915812 | orchestrator |
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:32.915824 | orchestrator | Wednesday 01 April 2026 00:25:32 +0000 (0:00:01.055) 0:00:22.739 *******
2026-04-01 00:25:32.915834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRsnxvmZLkr2j7kGeQu77lBrShZHEdIFr4L/QBngL9U)
2026-04-01 00:25:32.915865 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvJ5Tn7oECWa0NGirWitLTuTMN5BXkjUCXTI4r+DQ08NEPAJmDN9IYJ4wtSPQHw9mdoQUOL1Y0H6OLPySU8xCz66lUOIwrBZabtKnqbmD5W8tOgc2rMdX7hm3vvS/PwYEmzPMX7ox1IPKByN3nFOofKLVmi9pNymD9TXrz7Q3znejkssJjqUMUwQq61NxOkOV9uhO98Qhjy7bKQS0r60DIDpiu0m0okvnXB5LZseeUhn6TphHyTxDZKXzcwLe/TCCsSJvM/CpSko6hWRJq43blFUJuuRB3qCJa5ILqzt2Uflc65xcqgchLHOAoyBOHfX75/VjUabUx9Y7W4B1dsjXdZ6e7O+B+UiPZFHlDN5Kce+mi02FfGGKCR8xCuN57SAwSHbZxSA75mEW7bItA2YcP9ECasn87xwThL2pnfTzt6ZFgoD3ICDEGxJBNIHFupB9UwCd9YCXcbMW2vZgbf1QK8t0Z8kdHGNC8lNsd3OCye5+5s4wsCMkCsSIGmHYc4DM=)
2026-04-01 00:25:37.466717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLl+e0R5DSRjWeWjwBQf7amY3Y1R3UrH+eO9DTliKnl97u7UhlOPa2e75UxepwNJ2B9qnmk6TWMG7Fnh6CwRBag=)
2026-04-01 00:25:37.466796 | orchestrator |
2026-04-01 00:25:37.466807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:37.466816 | orchestrator | Wednesday 01 April 2026 00:25:33 +0000 (0:00:01.047) 0:00:23.787 *******
2026-04-01 00:25:37.466840 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv+8jp7ScGGRwx1FBZbSQhY+XEKm2TCF5JtMYgyaxqAT552mW4Xu37CoEuwBkvV6zTLXiUkokDLIi3FcVKDj/8AadSpKVNCRjNuTu76DOGkTQo6lhSii4XYzyNyeqDglgBcXRHRS5PSO1XxuKoR5i8dvwbFBvB2baoHWHrODrqz5qO5PmNosop8Rid/lgeY7fNUMB1pubU9ueW5k3Ph7Kk78/O9pT/r9bfXC5swAkg1p2K2G2lBZShoQeWKOhL+3byicKv0uST+Vk59by+5fJotsEY2tNg/QvKmNQl3ftwpKRR/022S3bu+4g5ZOPtz2bdGxx7QGwFtOCyYooloIpfH6G5d1THc5MLHGjDsoltHxnTtTx9uwIpPkCCdrf/+K+0nngIMbCPgqfloqzthd8st2qw9RoevZyXJnAhmBUwk2fpkGBOglJesuCGp908BQY/HEJi64GxEm2FwzDln/ejfeaaV7EM8fbgqxP64RYdsbizAmY4XbiySV25xHo5PqU=)
2026-04-01 00:25:37.466850 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPcoABsAbfwngbDjoPrGL4J78eFdR8jOx1TcAyAcSzWqedQ9f12VyvxC689S41fA0HqP4fY5n5HRsSH+2ouwXQs=)
2026-04-01 00:25:37.466861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBaSA6GRyNj/kEgywkLZATo9vZ93d0b7kuQlONZgdGZ1)
2026-04-01 00:25:37.466868 | orchestrator |
2026-04-01 00:25:37.466875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:37.466881 | orchestrator | Wednesday 01 April 2026 00:25:34 +0000 (0:00:01.088) 0:00:24.875 *******
2026-04-01 00:25:37.466888 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICHpCO45hM63Aj5fXr4C7m/eVCYatLZfnRggTtsKT1Hb)
2026-04-01 00:25:37.466894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf1Xzgo1oBauptCv50MFCc5FMKkRosZVM+Fz1SBNLPlYpfxOpergMBEadPamad26a+PspZ0yx7GhHiD227NXqAJDGuvG/t1teZDYArZ/MOJm6h4/vTwATffjbFTzBJKo2abPQDheUSc4gLkcZzbvhGFyLWHXFKGzJYJVCA4pKQWhNm9Rmhd7Aj+WexgnB3mK2+FNTt9J32/+0PmwoEb+FnJ/cMiFLSEbw05M5K3GWv8hEBRX/42ePW9dHwJgpFHHjSED5/fyfs7FC6CMFIOq+dD+NF4UKb4RpNwBX4QCydqt8G638ZdHTY5laiShfUXyvXldXL4zgbLaB956KMiUAd1B4nKagMZj8j0tZEAoRpNmhwKqUJuu27OvLVtwgCDCuQDJrBWyW3EI4+hweDjg8vh/jQpL+Nlj3vKLy/MIXqorkQgWG1gFxHwpAh2x3tZ9cnBnT4aREMD/1qH6i39p5X2Gfp5M4y5wDQiApSHHywcqLV7h3KHDoVGpspMFJvhpk=)
2026-04-01 00:25:37.466901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHDOO8W9EeqHessKZhwYav9yOL5scT8hP23eU09lgTlbdnhrAzNFgT/39PcRtQkQ4BD6l0CSchh0ctowCKQv4o=)
2026-04-01 00:25:37.466908 | orchestrator |
2026-04-01 00:25:37.466914 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-01 00:25:37.466920 | orchestrator | Wednesday 01 April 2026 00:25:35 +0000 (0:00:01.040) 0:00:25.916 *******
2026-04-01 00:25:37.466927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIONDfUWMVNXjdF020qx3zBlyiQpJMFeq6K+DNDeTUgzN)
2026-04-01 00:25:37.466933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx2eLPvKDoVwaBZRsLljCwlE02on8gGtLR7OsIw40k/QLBlKsVh9304y1FDZbXkIX7Ylfvz1fbTcuNrr9g0Ll2Dt4JvUetUTKuPXoN2LiU31+FPXKoTlA3OeUpN7slzn70i07kUxe1LImJXcqO2Pyb7IpR6ajteIR/AmAyTDA40UAycv9vvGws0GbZlaLBuLcUBzFY3N1QVUGub8lDnP81EMgGA6oSeKeXHiZGNFLvKbC5gXd2vqmmOT7NCYLDre3Inzr8xbaljiCijVGEbDOxXgOE+O8SHqzIDnZOjNXFSZQs0f1WyDpPI9rSxsxYOGzL6tYTTXdgM0INDP6yNwnj7ot3TNemYdzy7VUbEJ2eh1WV3ygvtJwAs27CZ47G2oXIFG+la1WkhZvRib/pXyrd9n88peCT4QopnDRQxvd4c8SQKN/HHfXoLx34NPbBSmg0eMSHjFNJdqqiuaY1aVw97JoiII7O09GVmp9dZy9nOtax3LnT4YVRlHJAK6zxBgM=)
2026-04-01 00:25:37.466958 | orchestrator | changed: [testbed-manager] =>
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRsEp2iEa+ueNX0kzjM/Bhm5t0CIlimedm6oC3c19PT4rCyYpxPl7cwCCU+o79YUQsVyuK20lTs9g44WiYwBC8=)
2026-04-01 00:25:37.466965 | orchestrator |
2026-04-01 00:25:37.466971 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-01 00:25:37.466977 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:01.085) 0:00:27.002 *******
2026-04-01 00:25:37.466984 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-01 00:25:37.466991 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-01 00:25:37.466997 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-01 00:25:37.467003 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-01 00:25:37.467021 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-01 00:25:37.467027 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-01 00:25:37.467034 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-01 00:25:37.467040 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:25:37.467047 | orchestrator |
2026-04-01 00:25:37.467053 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-04-01 00:25:37.467059 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:00.180) 0:00:27.182 *******
2026-04-01 00:25:37.467066 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:25:37.467072 | orchestrator |
2026-04-01 00:25:37.467078 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-04-01 00:25:37.467084 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:00.056) 0:00:27.245 *******
2026-04-01 00:25:37.467090 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:25:37.467097 | orchestrator |
2026-04-01 00:25:37.467103 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-04-01 00:25:37.467109 | orchestrator | Wednesday 01 April 2026 00:25:36 +0000 (0:00:00.056) 0:00:27.301 *******
2026-04-01 00:25:37.467115 | orchestrator | changed: [testbed-manager]
2026-04-01 00:25:37.467121 | orchestrator |
2026-04-01 00:25:37.467127 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:25:37.467134 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-01 00:25:37.467141 | orchestrator |
2026-04-01 00:25:37.467147 | orchestrator |
2026-04-01 00:25:37.467153 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:25:37.467159 | orchestrator | Wednesday 01 April 2026 00:25:37 +0000 (0:00:00.485) 0:00:27.787 *******
2026-04-01 00:25:37.467166 | orchestrator | ===============================================================================
2026-04-01 00:25:37.467172 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.35s
2026-04-01 00:25:37.467178 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s
2026-04-01 00:25:37.467185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s
2026-04-01 00:25:37.467191 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-04-01 00:25:37.467197 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-04-01 00:25:37.467203 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-04-01 00:25:37.467209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-04-01 00:25:37.467215 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-01 00:25:37.467221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-01 00:25:37.467232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-01 00:25:37.467239 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-01 00:25:37.467245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-01 00:25:37.467251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-01 00:25:37.467257 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-01 00:25:37.467263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-04-01 00:25:37.467270 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-04-01 00:25:37.467276 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s
2026-04-01 00:25:37.467282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2026-04-01 00:25:37.467289 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-04-01 00:25:37.467295 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2026-04-01 00:25:37.649314 | orchestrator | + osism apply squid
2026-04-01 00:25:49.014276 | orchestrator | 2026-04-01 00:25:49 | INFO  | Prepare task for execution of squid.
2026-04-01 00:25:49.090115 | orchestrator | 2026-04-01 00:25:49 | INFO  | Task 7c430f8f-bce3-4811-83f3-4035e47a4426 (squid) was prepared for execution.
2026-04-01 00:25:49.090209 | orchestrator | 2026-04-01 00:25:49 | INFO  | It takes a moment until task 7c430f8f-bce3-4811-83f3-4035e47a4426 (squid) has been started and output is visible here.
2026-04-01 00:27:50.040529 | orchestrator |
2026-04-01 00:27:50.040631 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-04-01 00:27:50.040644 | orchestrator |
2026-04-01 00:27:50.040651 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-04-01 00:27:50.040658 | orchestrator | Wednesday 01 April 2026 00:25:52 +0000 (0:00:00.192) 0:00:00.192 *******
2026-04-01 00:27:50.040666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-04-01 00:27:50.040673 | orchestrator |
2026-04-01 00:27:50.040680 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-04-01 00:27:50.040685 | orchestrator | Wednesday 01 April 2026 00:25:52 +0000 (0:00:00.078) 0:00:00.270 *******
2026-04-01 00:27:50.040689 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:50.040695 | orchestrator |
2026-04-01 00:27:50.040713 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-04-01 00:27:50.040717 | orchestrator | Wednesday 01 April 2026 00:25:54 +0000 (0:00:02.410) 0:00:02.681 *******
2026-04-01 00:27:50.040722 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-04-01 00:27:50.040727 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-04-01 00:27:50.040732 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-04-01 00:27:50.040735 | orchestrator |
2026-04-01 00:27:50.040739 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-04-01 00:27:50.040744 | orchestrator | Wednesday 01 April 2026 00:25:55 +0000 (0:00:01.214) 0:00:03.896 *******
2026-04-01 00:27:50.040748 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-04-01 00:27:50.040752 | orchestrator |
2026-04-01 00:27:50.040756 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-04-01 00:27:50.040760 | orchestrator | Wednesday 01 April 2026 00:25:56 +0000 (0:00:01.078) 0:00:04.974 *******
2026-04-01 00:27:50.040766 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:50.040772 | orchestrator |
2026-04-01 00:27:50.040787 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-04-01 00:27:50.040796 | orchestrator | Wednesday 01 April 2026 00:25:57 +0000 (0:00:00.353) 0:00:05.328 *******
2026-04-01 00:27:50.040825 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:50.040834 | orchestrator |
2026-04-01 00:27:50.040841 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-04-01 00:27:50.040847 | orchestrator | Wednesday 01 April 2026 00:25:58 +0000 (0:00:00.912) 0:00:06.240 *******
2026-04-01 00:27:50.040853 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-04-01 00:27:50.040860 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:50.040865 | orchestrator |
2026-04-01 00:27:50.040871 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-01 00:27:50.040878 | orchestrator | Wednesday 01 April 2026 00:26:33 +0000 (0:00:35.014) 0:00:41.255 *******
2026-04-01 00:27:50.040884 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:50.040889 | orchestrator |
2026-04-01 00:27:50.040900 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-01 00:27:50.040905 | orchestrator | Wednesday 01 April 2026 00:26:49 +0000 (0:00:15.845) 0:00:57.100 *******
2026-04-01 00:27:50.040911 | orchestrator | Pausing for 60 seconds
2026-04-01 00:27:50.040917 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:50.040922 | orchestrator |
2026-04-01 00:27:50.040928 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-01 00:27:50.040934 | orchestrator | Wednesday 01 April 2026 00:27:49 +0000 (0:01:00.083) 0:01:57.184 *******
2026-04-01 00:27:50.040939 | orchestrator | ok: [testbed-manager]
2026-04-01 00:27:50.040946 | orchestrator |
2026-04-01 00:27:50.040951 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-01 00:27:50.040957 | orchestrator | Wednesday 01 April 2026 00:27:49 +0000 (0:00:00.073) 0:01:57.258 *******
2026-04-01 00:27:50.040963 | orchestrator | changed: [testbed-manager]
2026-04-01 00:27:50.040970 | orchestrator |
2026-04-01 00:27:50.040975 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:27:50.040981 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:27:50.040987 | orchestrator |
2026-04-01 00:27:50.040994 | orchestrator |
2026-04-01 00:27:50.041000 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:27:50.041005 | orchestrator | Wednesday 01 April 2026 00:27:49 +0000 (0:00:00.593) 0:01:57.851 *******
2026-04-01 00:27:50.041011 | orchestrator | ===============================================================================
2026-04-01 00:27:50.041017 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-04-01 00:27:50.041023 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.01s
2026-04-01 00:27:50.041028 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.85s
2026-04-01 00:27:50.041034 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.41s
2026-04-01 00:27:50.041040 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s
2026-04-01 00:27:50.041045 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s
2026-04-01 00:27:50.041051 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s
2026-04-01 00:27:50.041057 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2026-04-01 00:27:50.041063 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-04-01 00:27:50.041069 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-04-01 00:27:50.041076 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-04-01 00:27:50.210860 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-01 00:27:50.210943 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-01 00:27:50.285023 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-01 00:27:50.285088 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh
kolla/release/ 2026-04-01 00:27:50.291418 | orchestrator | + set -e 2026-04-01 00:27:50.291508 | orchestrator | + NAMESPACE=kolla/release/ 2026-04-01 00:27:50.291547 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-01 00:27:50.298756 | orchestrator | ++ semver 10.0.0 9.0.0 2026-04-01 00:27:50.355993 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-01 00:27:50.356757 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-01 00:28:01.704042 | orchestrator | 2026-04-01 00:28:01 | INFO  | Prepare task for execution of operator. 2026-04-01 00:28:01.785054 | orchestrator | 2026-04-01 00:28:01 | INFO  | Task 270483db-5e5c-42a2-af39-7d9847daaa67 (operator) was prepared for execution. 2026-04-01 00:28:01.785148 | orchestrator | 2026-04-01 00:28:01 | INFO  | It takes a moment until task 270483db-5e5c-42a2-af39-7d9847daaa67 (operator) has been started and output is visible here. 2026-04-01 00:28:17.468027 | orchestrator | 2026-04-01 00:28:17.468154 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-01 00:28:17.468172 | orchestrator | 2026-04-01 00:28:17.468186 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-01 00:28:17.468200 | orchestrator | Wednesday 01 April 2026 00:28:04 +0000 (0:00:00.180) 0:00:00.180 ******* 2026-04-01 00:28:17.468213 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:28:17.468226 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:28:17.468238 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:28:17.468335 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:28:17.468351 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:28:17.468364 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:28:17.468376 | orchestrator | 2026-04-01 00:28:17.468390 | orchestrator | TASK [Do not require tty for all users] 
**************************************** 2026-04-01 00:28:17.468402 | orchestrator | Wednesday 01 April 2026 00:28:08 +0000 (0:00:03.377) 0:00:03.557 ******* 2026-04-01 00:28:17.468415 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:28:17.468428 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:28:17.468440 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:28:17.468453 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:28:17.468466 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:28:17.468480 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:28:17.468494 | orchestrator | 2026-04-01 00:28:17.468508 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-01 00:28:17.468522 | orchestrator | 2026-04-01 00:28:17.468536 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-01 00:28:17.468552 | orchestrator | Wednesday 01 April 2026 00:28:09 +0000 (0:00:00.892) 0:00:04.450 ******* 2026-04-01 00:28:17.468569 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:28:17.468585 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:28:17.468601 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:28:17.468617 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:28:17.468634 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:28:17.468651 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:28:17.468667 | orchestrator | 2026-04-01 00:28:17.468682 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-01 00:28:17.468697 | orchestrator | Wednesday 01 April 2026 00:28:09 +0000 (0:00:00.171) 0:00:04.621 ******* 2026-04-01 00:28:17.468713 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:28:17.468730 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:28:17.468746 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:28:17.468762 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:28:17.468778 | orchestrator | ok: 
[testbed-node-4] 2026-04-01 00:28:17.468794 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:28:17.468810 | orchestrator | 2026-04-01 00:28:17.468826 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-01 00:28:17.468842 | orchestrator | Wednesday 01 April 2026 00:28:09 +0000 (0:00:00.154) 0:00:04.776 ******* 2026-04-01 00:28:17.468859 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:17.468875 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:17.468891 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:17.468908 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:17.468952 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:17.468967 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:17.468981 | orchestrator | 2026-04-01 00:28:17.468997 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-01 00:28:17.469011 | orchestrator | Wednesday 01 April 2026 00:28:10 +0000 (0:00:00.829) 0:00:05.605 ******* 2026-04-01 00:28:17.469024 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:17.469037 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:17.469050 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:17.469063 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:17.469076 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:17.469089 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:17.469102 | orchestrator | 2026-04-01 00:28:17.469114 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-01 00:28:17.469128 | orchestrator | Wednesday 01 April 2026 00:28:11 +0000 (0:00:00.955) 0:00:06.561 ******* 2026-04-01 00:28:17.469141 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-01 00:28:17.469155 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-01 00:28:17.469167 | orchestrator | 
changed: [testbed-node-3] => (item=adm) 2026-04-01 00:28:17.469181 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-01 00:28:17.469194 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-01 00:28:17.469206 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-01 00:28:17.469219 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-01 00:28:17.469231 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-01 00:28:17.469245 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-01 00:28:17.469283 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-01 00:28:17.469298 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-01 00:28:17.469311 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-01 00:28:17.469325 | orchestrator | 2026-04-01 00:28:17.469339 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-01 00:28:17.469351 | orchestrator | Wednesday 01 April 2026 00:28:12 +0000 (0:00:01.253) 0:00:07.815 ******* 2026-04-01 00:28:17.469363 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:17.469376 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:17.469389 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:17.469402 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:17.469416 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:17.469429 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:17.469441 | orchestrator | 2026-04-01 00:28:17.469455 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-01 00:28:17.469470 | orchestrator | Wednesday 01 April 2026 00:28:13 +0000 (0:00:01.375) 0:00:09.191 ******* 2026-04-01 00:28:17.469485 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469522 | orchestrator | changed: [testbed-node-4] => (item=export 
LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469531 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469539 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469547 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469578 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-01 00:28:17.469586 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469595 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469603 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469614 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469625 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469639 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-01 00:28:17.469652 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469679 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-01 00:28:17.469692 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-01 00:28:17.469706 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-01 00:28:17.469720 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469733 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469746 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469759 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469773 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-01 00:28:17.469785 | orchestrator | 2026-04-01 00:28:17.469798 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-01 00:28:17.469812 | orchestrator | Wednesday 01 April 2026 00:28:15 +0000 (0:00:01.443) 0:00:10.634 ******* 2026-04-01 00:28:17.469833 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:17.469846 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:17.469857 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:17.469868 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:17.469875 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:17.469882 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:17.469888 | orchestrator | 2026-04-01 00:28:17.469895 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-01 00:28:17.469902 | orchestrator | Wednesday 01 April 2026 00:28:15 +0000 (0:00:00.147) 0:00:10.782 ******* 2026-04-01 00:28:17.469909 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:17.469915 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:17.469922 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:17.469928 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:17.469935 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:28:17.469942 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:17.469948 | orchestrator | 2026-04-01 00:28:17.469967 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-01 00:28:17.469978 | orchestrator | Wednesday 01 April 2026 00:28:15 +0000 (0:00:00.157) 0:00:10.939 ******* 2026-04-01 00:28:17.469988 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:17.469999 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:17.470010 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:17.470079 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:17.470087 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:17.470094 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:17.470101 | orchestrator | 2026-04-01 00:28:17.470107 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-01 00:28:17.470114 | orchestrator | Wednesday 01 April 2026 00:28:16 +0000 (0:00:00.536) 0:00:11.475 ******* 2026-04-01 00:28:17.470121 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:17.470127 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:17.470134 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:17.470142 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:17.470153 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:17.470165 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:17.470176 | orchestrator | 2026-04-01 00:28:17.470187 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-01 00:28:17.470198 | orchestrator | Wednesday 01 April 2026 00:28:16 +0000 (0:00:00.175) 0:00:11.651 ******* 2026-04-01 00:28:17.470210 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:28:17.470219 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:17.470226 | orchestrator | changed: 
[testbed-node-3] => (item=None) 2026-04-01 00:28:17.470232 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:17.470274 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-01 00:28:17.470284 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:17.470291 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-01 00:28:17.470297 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:17.470304 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-01 00:28:17.470311 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:17.470321 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-01 00:28:17.470331 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:17.470340 | orchestrator | 2026-04-01 00:28:17.470349 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-01 00:28:17.470358 | orchestrator | Wednesday 01 April 2026 00:28:17 +0000 (0:00:00.851) 0:00:12.503 ******* 2026-04-01 00:28:17.470367 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:17.470377 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:17.470396 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:17.470406 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:17.470416 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:17.470427 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:17.470438 | orchestrator | 2026-04-01 00:28:17.470449 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-01 00:28:17.470460 | orchestrator | Wednesday 01 April 2026 00:28:17 +0000 (0:00:00.136) 0:00:12.640 ******* 2026-04-01 00:28:17.470470 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:17.470481 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:17.470487 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:17.470494 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:28:17.470511 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:18.791833 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:18.791936 | orchestrator | 2026-04-01 00:28:18.791953 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-01 00:28:18.791965 | orchestrator | Wednesday 01 April 2026 00:28:17 +0000 (0:00:00.129) 0:00:12.769 ******* 2026-04-01 00:28:18.791977 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:18.791988 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:28:18.791999 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:18.792009 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:18.792020 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:18.792030 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:18.792041 | orchestrator | 2026-04-01 00:28:18.792052 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-01 00:28:18.792064 | orchestrator | Wednesday 01 April 2026 00:28:17 +0000 (0:00:00.132) 0:00:12.902 ******* 2026-04-01 00:28:18.792074 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:28:18.792085 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:28:18.792096 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:28:18.792106 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:28:18.792117 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:28:18.792127 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:28:18.792139 | orchestrator | 2026-04-01 00:28:18.792150 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-01 00:28:18.792161 | orchestrator | Wednesday 01 April 2026 00:28:18 +0000 (0:00:00.791) 0:00:13.693 ******* 2026-04-01 00:28:18.792171 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:28:18.792182 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:28:18.792192 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:28:18.792203 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:28:18.792213 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:28:18.792225 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:28:18.792243 | orchestrator | 2026-04-01 00:28:18.792408 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:28:18.792432 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792490 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792504 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792517 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792531 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792544 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-01 00:28:18.792556 | orchestrator | 2026-04-01 00:28:18.792569 | orchestrator | 2026-04-01 00:28:18.792581 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:28:18.792593 | orchestrator | Wednesday 01 April 2026 00:28:18 +0000 (0:00:00.218) 0:00:13.912 ******* 2026-04-01 00:28:18.792606 | orchestrator | =============================================================================== 2026-04-01 00:28:18.792618 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s 2026-04-01 00:28:18.792630 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.44s 2026-04-01 
00:28:18.792643 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.38s 2026-04-01 00:28:18.792655 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2026-04-01 00:28:18.792667 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.96s 2026-04-01 00:28:18.792680 | orchestrator | Do not require tty for all users ---------------------------------------- 0.89s 2026-04-01 00:28:18.792692 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.85s 2026-04-01 00:28:18.792704 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.83s 2026-04-01 00:28:18.792717 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.79s 2026-04-01 00:28:18.792729 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2026-04-01 00:28:18.792743 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-04-01 00:28:18.792753 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-04-01 00:28:18.792764 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-04-01 00:28:18.792774 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-04-01 00:28:18.792785 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-04-01 00:28:18.792796 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-04-01 00:28:18.792807 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2026-04-01 00:28:18.792817 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 
2026-04-01 00:28:18.792828 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2026-04-01 00:28:18.979709 | orchestrator | + osism apply --environment custom facts 2026-04-01 00:28:20.244719 | orchestrator | 2026-04-01 00:28:20 | INFO  | Trying to run play facts in environment custom 2026-04-01 00:28:30.377007 | orchestrator | 2026-04-01 00:28:30 | INFO  | Prepare task for execution of facts. 2026-04-01 00:28:30.452327 | orchestrator | 2026-04-01 00:28:30 | INFO  | Task c4c25fdb-1dc7-419e-ba1a-adf17e165746 (facts) was prepared for execution. 2026-04-01 00:28:30.452451 | orchestrator | 2026-04-01 00:28:30 | INFO  | It takes a moment until task c4c25fdb-1dc7-419e-ba1a-adf17e165746 (facts) has been started and output is visible here. 2026-04-01 00:29:17.291483 | orchestrator | 2026-04-01 00:29:17.291576 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-01 00:29:17.291588 | orchestrator | 2026-04-01 00:29:17.291596 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-01 00:29:17.291682 | orchestrator | Wednesday 01 April 2026 00:28:33 +0000 (0:00:00.117) 0:00:00.117 ******* 2026-04-01 00:29:17.291691 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:29:17.291700 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:29:17.291707 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:29:17.291713 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:29:17.291720 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:29:17.291727 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:29:17.291732 | orchestrator | ok: [testbed-manager] 2026-04-01 00:29:17.291738 | orchestrator | 2026-04-01 00:29:17.291744 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-01 00:29:17.291750 | orchestrator | Wednesday 01 April 2026 00:28:34 +0000 (0:00:01.368) 
0:00:01.486 ******* 2026-04-01 00:29:17.291757 | orchestrator | ok: [testbed-manager] 2026-04-01 00:29:17.291764 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:29:17.291778 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:29:17.291785 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:29:17.291792 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:29:17.291799 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:29:17.291806 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:29:17.291813 | orchestrator | 2026-04-01 00:29:17.291820 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-01 00:29:17.291827 | orchestrator | 2026-04-01 00:29:17.291834 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-01 00:29:17.291841 | orchestrator | Wednesday 01 April 2026 00:28:36 +0000 (0:00:01.376) 0:00:02.863 ******* 2026-04-01 00:29:17.291848 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:29:17.291854 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:29:17.291861 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:29:17.291868 | orchestrator | 2026-04-01 00:29:17.291875 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-01 00:29:17.291883 | orchestrator | Wednesday 01 April 2026 00:28:36 +0000 (0:00:00.084) 0:00:02.947 ******* 2026-04-01 00:29:17.291890 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:29:17.291897 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:29:17.291904 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:29:17.291911 | orchestrator | 2026-04-01 00:29:17.291918 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-01 00:29:17.291925 | orchestrator | Wednesday 01 April 2026 00:28:36 +0000 (0:00:00.192) 0:00:03.139 ******* 2026-04-01 00:29:17.291932 | orchestrator | ok: [testbed-node-3] 
2026-04-01 00:29:17.291939 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:29:17.291946 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:29:17.291952 | orchestrator | 2026-04-01 00:29:17.291959 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-01 00:29:17.291966 | orchestrator | Wednesday 01 April 2026 00:28:36 +0000 (0:00:00.208) 0:00:03.347 ******* 2026-04-01 00:29:17.291975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:29:17.291983 | orchestrator | 2026-04-01 00:29:17.291990 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-01 00:29:17.291997 | orchestrator | Wednesday 01 April 2026 00:28:36 +0000 (0:00:00.127) 0:00:03.475 ******* 2026-04-01 00:29:17.292004 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:29:17.292011 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:29:17.292018 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:29:17.292025 | orchestrator | 2026-04-01 00:29:17.292032 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-01 00:29:17.292056 | orchestrator | Wednesday 01 April 2026 00:28:37 +0000 (0:00:00.489) 0:00:03.965 ******* 2026-04-01 00:29:17.292063 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:29:17.292068 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:29:17.292074 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:29:17.292080 | orchestrator | 2026-04-01 00:29:17.292087 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-01 00:29:17.292095 | orchestrator | Wednesday 01 April 2026 00:28:37 +0000 (0:00:00.119) 0:00:04.084 ******* 2026-04-01 00:29:17.292103 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:29:17.292111 | 
orchestrator | changed: [testbed-node-4] 2026-04-01 00:29:17.292119 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:29:17.292126 | orchestrator | 2026-04-01 00:29:17.292134 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-01 00:29:17.292141 | orchestrator | Wednesday 01 April 2026 00:28:38 +0000 (0:00:01.126) 0:00:05.210 ******* 2026-04-01 00:29:17.292150 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:29:17.292158 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:29:17.292165 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:29:17.292173 | orchestrator | 2026-04-01 00:29:17.292181 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-01 00:29:17.292189 | orchestrator | Wednesday 01 April 2026 00:28:39 +0000 (0:00:00.484) 0:00:05.695 ******* 2026-04-01 00:29:17.292196 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:29:17.292252 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:29:17.292260 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:29:17.292267 | orchestrator | 2026-04-01 00:29:17.292273 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-01 00:29:17.292281 | orchestrator | Wednesday 01 April 2026 00:28:40 +0000 (0:00:01.164) 0:00:06.859 ******* 2026-04-01 00:29:17.292288 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:29:17.292296 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:29:17.292303 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:29:17.292311 | orchestrator | 2026-04-01 00:29:17.292319 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-01 00:29:17.292328 | orchestrator | Wednesday 01 April 2026 00:28:57 +0000 (0:00:17.682) 0:00:24.541 ******* 2026-04-01 00:29:17.292337 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:29:17.292344 | orchestrator | skipping: 
[testbed-node-4]
2026-04-01 00:29:17.292352 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:17.292360 | orchestrator |
2026-04-01 00:29:17.292367 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-01 00:29:17.292393 | orchestrator | Wednesday 01 April 2026 00:28:58 +0000 (0:00:00.098) 0:00:24.640 *******
2026-04-01 00:29:17.292401 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:17.292409 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:17.292416 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:17.292423 | orchestrator |
2026-04-01 00:29:17.292430 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-01 00:29:17.292437 | orchestrator | Wednesday 01 April 2026 00:29:07 +0000 (0:00:09.651) 0:00:34.292 *******
2026-04-01 00:29:17.292444 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:17.292451 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:17.292457 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:17.292465 | orchestrator |
2026-04-01 00:29:17.292472 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-01 00:29:17.292479 | orchestrator | Wednesday 01 April 2026 00:29:08 +0000 (0:00:00.477) 0:00:34.769 *******
2026-04-01 00:29:17.292485 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-01 00:29:17.292491 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-01 00:29:17.292503 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-01 00:29:17.292510 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-01 00:29:17.292524 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-01 00:29:17.292532 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-01 00:29:17.292538 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-01 00:29:17.292545 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-01 00:29:17.292552 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-01 00:29:17.292559 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-01 00:29:17.292565 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-01 00:29:17.292572 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-01 00:29:17.292579 | orchestrator |
2026-04-01 00:29:17.292586 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-01 00:29:17.292593 | orchestrator | Wednesday 01 April 2026 00:29:12 +0000 (0:00:03.821) 0:00:38.591 *******
2026-04-01 00:29:17.292600 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:17.292607 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:17.292613 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:17.292619 | orchestrator |
2026-04-01 00:29:17.292626 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 00:29:17.292633 | orchestrator |
2026-04-01 00:29:17.292640 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:29:17.292648 | orchestrator | Wednesday 01 April 2026 00:29:13 +0000 (0:00:01.707) 0:00:40.299 *******
2026-04-01 00:29:17.292655 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:17.292662 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:17.292669 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:17.292675 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:17.292680 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:17.292686 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:17.292691 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:17.292697 | orchestrator |
2026-04-01 00:29:17.292702 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:29:17.292709 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:29:17.292715 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:29:17.292724 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:29:17.292731 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:29:17.292738 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:29:17.292745 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:29:17.292752 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:29:17.292758 | orchestrator |
2026-04-01 00:29:17.292765 | orchestrator |
2026-04-01 00:29:17.292772 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:29:17.292779 | orchestrator | Wednesday 01 April 2026 00:29:17 +0000 (0:00:03.567) 0:00:43.866 *******
2026-04-01 00:29:17.292786 | orchestrator | ===============================================================================
2026-04-01 00:29:17.292793 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.68s
2026-04-01 00:29:17.292800 | orchestrator | Install required packages (Debian) -------------------------------------- 9.65s
2026-04-01 00:29:17.292813 | orchestrator | Copy fact files --------------------------------------------------------- 3.82s
2026-04-01 00:29:17.292820 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.57s
2026-04-01 00:29:17.292827 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.71s
2026-04-01 00:29:17.292834 | orchestrator | Copy fact file ---------------------------------------------------------- 1.38s
2026-04-01 00:29:17.292849 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-04-01 00:29:17.483878 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.16s
2026-04-01 00:29:17.483997 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.13s
2026-04-01 00:29:17.484016 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2026-04-01 00:29:17.484030 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-04-01 00:29:17.484043 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-04-01 00:29:17.484056 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-04-01 00:29:17.484069 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-04-01 00:29:17.484083 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-04-01 00:29:17.484097 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-04-01 00:29:17.484111 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-01 00:29:17.484125 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-04-01 00:29:17.654200 | orchestrator | + osism apply bootstrap
2026-04-01 00:29:28.885797 | orchestrator | 2026-04-01 00:29:28 | INFO  | Prepare task for execution of bootstrap.
2026-04-01 00:29:28.964347 | orchestrator | 2026-04-01 00:29:28 | INFO  | Task 42911827-95d8-43ce-b116-09cc5a22d6da (bootstrap) was prepared for execution.
2026-04-01 00:29:28.964449 | orchestrator | 2026-04-01 00:29:28 | INFO  | It takes a moment until task 42911827-95d8-43ce-b116-09cc5a22d6da (bootstrap) has been started and output is visible here.
2026-04-01 00:29:45.077056 | orchestrator |
2026-04-01 00:29:45.077210 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-01 00:29:45.077234 | orchestrator |
2026-04-01 00:29:45.077249 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-01 00:29:45.077264 | orchestrator | Wednesday 01 April 2026 00:29:32 +0000 (0:00:00.188) 0:00:00.188 *******
2026-04-01 00:29:45.077280 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:45.077296 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:45.077310 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:45.077324 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:45.077339 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:45.077354 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:45.077368 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:45.077382 | orchestrator |
2026-04-01 00:29:45.077418 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 00:29:45.077433 | orchestrator |
2026-04-01 00:29:45.077447 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:29:45.077462 | orchestrator | Wednesday 01 April 2026 00:29:32 +0000 (0:00:00.294) 0:00:00.483 *******
2026-04-01 00:29:45.077477 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:45.077491 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:45.077506 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:45.077520 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:45.077534 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:45.077548 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:45.077563 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:45.077579 | orchestrator |
2026-04-01 00:29:45.077595 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-01 00:29:45.077639 | orchestrator |
2026-04-01 00:29:45.077655 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:29:45.077671 | orchestrator | Wednesday 01 April 2026 00:29:37 +0000 (0:00:05.060) 0:00:05.544 *******
2026-04-01 00:29:45.077687 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-01 00:29:45.077703 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-01 00:29:45.077717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-01 00:29:45.077732 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-01 00:29:45.077748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:29:45.077764 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-01 00:29:45.077778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-01 00:29:45.077793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-01 00:29:45.077809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:29:45.077823 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-01 00:29:45.077838 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-01 00:29:45.077853 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:29:45.077869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:29:45.077883 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:29:45.077899 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-01 00:29:45.077915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-01 00:29:45.077929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:29:45.077943 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:45.077958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-01 00:29:45.077972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-01 00:29:45.077985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-01 00:29:45.077999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:29:45.078013 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:29:45.078158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:29:45.078174 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:29:45.078296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:29:45.078313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:29:45.078327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-01 00:29:45.078357 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-01 00:29:45.078384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:29:45.078400 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:29:45.078414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-01 00:29:45.078427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:29:45.078441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-01 00:29:45.078455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:29:45.078479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-01 00:29:45.078493 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-01 00:29:45.078506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-01 00:29:45.078520 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:45.078534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-01 00:29:45.078548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:29:45.078563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-01 00:29:45.078592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-01 00:29:45.078606 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:45.078620 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-01 00:29:45.078635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:29:45.078676 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-01 00:29:45.078691 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-01 00:29:45.078707 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:45.078721 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-01 00:29:45.078737 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:45.078750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:29:45.078764 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-01 00:29:45.078779 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:45.078792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:29:45.078805 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:45.078819 | orchestrator |
2026-04-01 00:29:45.078833 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-01 00:29:45.078846 | orchestrator |
2026-04-01 00:29:45.078860 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-01 00:29:45.078876 | orchestrator | Wednesday 01 April 2026 00:29:38 +0000 (0:00:00.427) 0:00:05.972 *******
2026-04-01 00:29:45.078890 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:45.078904 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:45.078918 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:45.078931 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:45.078945 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:45.078959 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:45.078973 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:45.078987 | orchestrator |
2026-04-01 00:29:45.079001 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-01 00:29:45.079015 | orchestrator | Wednesday 01 April 2026 00:29:39 +0000 (0:00:01.424) 0:00:07.397 *******
2026-04-01 00:29:45.079029 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:45.079043 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:45.079057 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:45.079070 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:45.079083 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:45.079097 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:45.079111 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:45.079125 | orchestrator |
2026-04-01 00:29:45.079139 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-01 00:29:45.079153 | orchestrator | Wednesday 01 April 2026 00:29:40 +0000 (0:00:01.274) 0:00:08.671 *******
2026-04-01 00:29:45.079168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:45.079211 | orchestrator |
2026-04-01 00:29:45.079226 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-01 00:29:45.079241 | orchestrator | Wednesday 01 April 2026 00:29:40 +0000 (0:00:00.263) 0:00:08.934 *******
2026-04-01 00:29:45.079255 | orchestrator | changed: [testbed-manager]
2026-04-01 00:29:45.079269 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:45.079283 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:45.079297 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:45.079311 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:45.079324 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:45.079338 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:45.079352 | orchestrator |
2026-04-01 00:29:45.079367 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-01 00:29:45.079380 | orchestrator | Wednesday 01 April 2026 00:29:42 +0000 (0:00:01.576) 0:00:10.511 *******
2026-04-01 00:29:45.079408 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:45.079425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:45.079442 | orchestrator |
2026-04-01 00:29:45.079456 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-01 00:29:45.079472 | orchestrator | Wednesday 01 April 2026 00:29:42 +0000 (0:00:00.285) 0:00:10.797 *******
2026-04-01 00:29:45.079486 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:45.079501 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:45.079515 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:45.079528 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:45.079542 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:45.079556 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:45.079569 | orchestrator |
2026-04-01 00:29:45.079584 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-01 00:29:45.079597 | orchestrator | Wednesday 01 April 2026 00:29:43 +0000 (0:00:01.108) 0:00:11.905 *******
2026-04-01 00:29:45.079612 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:45.079625 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:45.079640 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:45.079654 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:45.079668 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:45.079682 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:45.079695 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:45.079710 | orchestrator |
2026-04-01 00:29:45.079734 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-01 00:29:45.079749 | orchestrator | Wednesday 01 April 2026 00:29:44 +0000 (0:00:00.627) 0:00:12.533 *******
2026-04-01 00:29:45.079763 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:45.079777 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:45.079791 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:45.079805 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:45.079819 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:45.079833 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:45.079846 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:45.079859 | orchestrator |
2026-04-01 00:29:45.079873 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-01 00:29:45.079888 | orchestrator | Wednesday 01 April 2026 00:29:44 +0000 (0:00:00.404) 0:00:12.937 *******
2026-04-01 00:29:45.079902 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:45.079915 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:45.079946 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:57.274876 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:57.274972 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:57.274982 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:57.274988 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:57.274994 | orchestrator |
2026-04-01 00:29:57.275001 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-01 00:29:57.275009 | orchestrator | Wednesday 01 April 2026 00:29:45 +0000 (0:00:00.196) 0:00:13.134 *******
2026-04-01 00:29:57.275017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:57.275036 | orchestrator |
2026-04-01 00:29:57.275043 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-01 00:29:57.275050 | orchestrator | Wednesday 01 April 2026 00:29:45 +0000 (0:00:00.286) 0:00:13.421 *******
2026-04-01 00:29:57.275056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:57.275081 | orchestrator |
2026-04-01 00:29:57.275088 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-01 00:29:57.275093 | orchestrator | Wednesday 01 April 2026 00:29:45 +0000 (0:00:00.294) 0:00:13.715 *******
2026-04-01 00:29:57.275099 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275106 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275111 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275117 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275123 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275128 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275134 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275140 | orchestrator |
2026-04-01 00:29:57.275145 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-01 00:29:57.275151 | orchestrator | Wednesday 01 April 2026 00:29:47 +0000 (0:00:01.371) 0:00:15.087 *******
2026-04-01 00:29:57.275157 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:57.275163 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:57.275198 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:57.275208 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:57.275219 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:57.275225 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:57.275231 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:57.275237 | orchestrator |
2026-04-01 00:29:57.275243 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-01 00:29:57.275249 | orchestrator | Wednesday 01 April 2026 00:29:47 +0000 (0:00:00.201) 0:00:15.288 *******
2026-04-01 00:29:57.275254 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275260 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275266 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275272 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275278 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275284 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275290 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275296 | orchestrator |
2026-04-01 00:29:57.275302 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-01 00:29:57.275308 | orchestrator | Wednesday 01 April 2026 00:29:47 +0000 (0:00:00.557) 0:00:15.846 *******
2026-04-01 00:29:57.275314 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:57.275320 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:57.275325 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:57.275331 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:57.275337 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:57.275343 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:57.275348 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:57.275354 | orchestrator |
2026-04-01 00:29:57.275360 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-01 00:29:57.275367 | orchestrator | Wednesday 01 April 2026 00:29:48 +0000 (0:00:00.241) 0:00:16.088 *******
2026-04-01 00:29:57.275373 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275379 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:57.275385 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:57.275391 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:57.275397 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:57.275402 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:57.275408 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:57.275415 | orchestrator |
2026-04-01 00:29:57.275422 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-01 00:29:57.275428 | orchestrator | Wednesday 01 April 2026 00:29:48 +0000 (0:00:00.676) 0:00:16.765 *******
2026-04-01 00:29:57.275435 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275442 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:57.275448 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:57.275461 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:57.275470 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:57.275480 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:57.275495 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:57.275505 | orchestrator |
2026-04-01 00:29:57.275515 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-01 00:29:57.275524 | orchestrator | Wednesday 01 April 2026 00:29:49 +0000 (0:00:01.167) 0:00:17.932 *******
2026-04-01 00:29:57.275533 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275541 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275562 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275572 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275581 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275592 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275602 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275612 | orchestrator |
2026-04-01 00:29:57.275622 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-01 00:29:57.275632 | orchestrator | Wednesday 01 April 2026 00:29:51 +0000 (0:00:01.075) 0:00:19.008 *******
2026-04-01 00:29:57.275654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:57.275662 | orchestrator |
2026-04-01 00:29:57.275668 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-01 00:29:57.275675 | orchestrator | Wednesday 01 April 2026 00:29:51 +0000 (0:00:00.296) 0:00:19.304 *******
2026-04-01 00:29:57.275682 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:57.275689 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:57.275695 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:57.275702 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:29:57.275708 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:57.275715 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:29:57.275722 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:29:57.275728 | orchestrator |
2026-04-01 00:29:57.275735 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-01 00:29:57.275741 | orchestrator | Wednesday 01 April 2026 00:29:52 +0000 (0:00:01.342) 0:00:20.647 *******
2026-04-01 00:29:57.275748 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275754 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275761 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275768 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275774 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275781 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275786 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275792 | orchestrator |
2026-04-01 00:29:57.275798 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-01 00:29:57.275804 | orchestrator | Wednesday 01 April 2026 00:29:52 +0000 (0:00:00.239) 0:00:20.887 *******
2026-04-01 00:29:57.275810 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275816 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275821 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275827 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275833 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275838 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275844 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275850 | orchestrator |
2026-04-01 00:29:57.275856 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-01 00:29:57.275862 | orchestrator | Wednesday 01 April 2026 00:29:53 +0000 (0:00:00.212) 0:00:21.099 *******
2026-04-01 00:29:57.275868 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.275875 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.275884 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.275893 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.275911 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.275920 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.275929 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.275938 | orchestrator |
2026-04-01 00:29:57.275947 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-01 00:29:57.275956 | orchestrator | Wednesday 01 April 2026 00:29:53 +0000 (0:00:00.193) 0:00:21.293 *******
2026-04-01 00:29:57.275967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:29:57.275977 | orchestrator |
2026-04-01 00:29:57.275985 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-01 00:29:57.275993 | orchestrator | Wednesday 01 April 2026 00:29:53 +0000 (0:00:00.264) 0:00:21.558 *******
2026-04-01 00:29:57.276002 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.276010 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.276019 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.276028 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.276037 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.276046 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.276056 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.276066 | orchestrator |
2026-04-01 00:29:57.276076 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-01 00:29:57.276086 | orchestrator | Wednesday 01 April 2026 00:29:54 +0000 (0:00:00.663) 0:00:22.221 *******
2026-04-01 00:29:57.276097 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:29:57.276106 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:29:57.276116 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:29:57.276125 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:29:57.276134 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:29:57.276143 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:29:57.276152 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:29:57.276160 | orchestrator |
2026-04-01 00:29:57.276190 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-01 00:29:57.276199 | orchestrator | Wednesday 01 April 2026 00:29:54 +0000 (0:00:00.212) 0:00:22.434 *******
2026-04-01 00:29:57.276208 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.276217 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:57.276226 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:57.276235 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:29:57.276244 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.276254 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.276263 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.276272 | orchestrator |
2026-04-01 00:29:57.276290 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-01 00:29:57.276300 | orchestrator | Wednesday 01 April 2026 00:29:55 +0000 (0:00:01.126) 0:00:23.560 *******
2026-04-01 00:29:57.276309 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.276319 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:29:57.276328 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:29:57.276338 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:29:57.276347 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.276356 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:29:57.276365 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:29:57.276374 | orchestrator |
2026-04-01 00:29:57.276384 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-01 00:29:57.276393 | orchestrator | Wednesday 01 April 2026 00:29:56 +0000 (0:00:00.641) 0:00:24.202 *******
2026-04-01 00:29:57.276403 | orchestrator | ok: [testbed-manager]
2026-04-01 00:29:57.276412 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:29:57.276422 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:29:57.276432 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:29:57.276454 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.409677 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:30:39.409812 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.409831 | orchestrator |
2026-04-01 00:30:39.409844 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-01 00:30:39.409856 | orchestrator | Wednesday 01 April 2026 00:29:57 +0000 (0:00:01.218) 0:00:25.420 *******
2026-04-01 00:30:39.409866 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:30:39.409875 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.409885 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.409896 | orchestrator | changed: [testbed-manager]
2026-04-01 00:30:39.409925 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:30:39.409936 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:30:39.409946 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:30:39.409957 | orchestrator |
2026-04-01 00:30:39.409968 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-01 00:30:39.409979 | orchestrator | Wednesday 01 April 2026 00:30:15 +0000 (0:00:17.820) 0:00:43.241 *******
2026-04-01 00:30:39.409989 | orchestrator | ok: [testbed-manager]
2026-04-01 00:30:39.409999 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:30:39.410009 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:30:39.410067 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:30:39.410079 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:30:39.410099 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.410110 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.410121 | orchestrator |
2026-04-01 00:30:39.410153 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-01 00:30:39.410163 | orchestrator | Wednesday 01 April 2026 00:30:15 +0000 (0:00:00.219) 0:00:43.460 *******
2026-04-01 00:30:39.410173 | orchestrator | ok: [testbed-manager]
2026-04-01 00:30:39.410184 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:30:39.410194 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:30:39.410204 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:30:39.410214 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:30:39.410225 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.410235 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.410247 | orchestrator |
2026-04-01 00:30:39.410259 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-01 00:30:39.410270 | orchestrator | Wednesday 01 April 2026 00:30:15 +0000 (0:00:00.211) 0:00:43.672 *******
2026-04-01 00:30:39.410281 | orchestrator | ok: [testbed-manager]
2026-04-01 00:30:39.410292 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:30:39.410303 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:30:39.410314 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:30:39.410324 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:30:39.410335 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.410345 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.410356 | orchestrator |
2026-04-01 00:30:39.410367 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-01 00:30:39.410378 | orchestrator | Wednesday 01 April 2026 00:30:15 +0000 (0:00:00.217) 0:00:43.889 *******
2026-04-01 00:30:39.410391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:30:39.410403 | orchestrator |
2026-04-01 00:30:39.410414 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-01 00:30:39.410424 | orchestrator | Wednesday 01 April 2026 00:30:16 +0000 (0:00:00.287) 0:00:44.177 *******
2026-04-01 00:30:39.410435 | orchestrator | ok: [testbed-manager]
2026-04-01 00:30:39.410445 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:30:39.410455 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:30:39.410466 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:30:39.410475 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:30:39.410482 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:30:39.410490 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:30:39.410497 | orchestrator |
2026-04-01 00:30:39.410503 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-01 00:30:39.410519 | orchestrator | Wednesday 01 April 2026 00:30:18 +0000 (0:00:02.003) 0:00:46.180 *******
2026-04-01 00:30:39.410547 | orchestrator | changed: [testbed-manager]
2026-04-01 00:30:39.410553 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:30:39.410570 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:30:39.410576 | orchestrator |
changed: [testbed-node-4] 2026-04-01 00:30:39.410582 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:39.410589 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:39.410595 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:39.410601 | orchestrator | 2026-04-01 00:30:39.410607 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-01 00:30:39.410613 | orchestrator | Wednesday 01 April 2026 00:30:19 +0000 (0:00:01.220) 0:00:47.401 ******* 2026-04-01 00:30:39.410621 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:39.410632 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.410642 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.410651 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.410661 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:39.410670 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.410679 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.410688 | orchestrator | 2026-04-01 00:30:39.410697 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-01 00:30:39.410723 | orchestrator | Wednesday 01 April 2026 00:30:20 +0000 (0:00:00.878) 0:00:48.280 ******* 2026-04-01 00:30:39.410735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:30:39.410747 | orchestrator | 2026-04-01 00:30:39.410753 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-01 00:30:39.410760 | orchestrator | Wednesday 01 April 2026 00:30:20 +0000 (0:00:00.316) 0:00:48.597 ******* 2026-04-01 00:30:39.410766 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:39.410773 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:39.410779 | 
orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:39.410785 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:39.410791 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:39.410800 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:39.410810 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:39.410820 | orchestrator | 2026-04-01 00:30:39.410849 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-01 00:30:39.410859 | orchestrator | Wednesday 01 April 2026 00:30:21 +0000 (0:00:01.038) 0:00:49.635 ******* 2026-04-01 00:30:39.410869 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:30:39.410878 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:30:39.410885 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:30:39.410891 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:30:39.410897 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:30:39.410903 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:30:39.410909 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:30:39.410915 | orchestrator | 2026-04-01 00:30:39.410921 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-01 00:30:39.410928 | orchestrator | Wednesday 01 April 2026 00:30:21 +0000 (0:00:00.221) 0:00:49.856 ******* 2026-04-01 00:30:39.410934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:30:39.410941 | orchestrator | 2026-04-01 00:30:39.410947 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-01 00:30:39.410953 | orchestrator | Wednesday 01 April 2026 00:30:22 +0000 (0:00:00.319) 0:00:50.176 ******* 2026-04-01 00:30:39.410959 | orchestrator | ok: 
[testbed-manager] 2026-04-01 00:30:39.410965 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.410977 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.410983 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:39.410990 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.410996 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.411002 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.411008 | orchestrator | 2026-04-01 00:30:39.411014 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-01 00:30:39.411020 | orchestrator | Wednesday 01 April 2026 00:30:24 +0000 (0:00:01.878) 0:00:52.055 ******* 2026-04-01 00:30:39.411027 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:39.411033 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:39.411039 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:39.411045 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:39.411051 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:39.411058 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:39.411064 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:39.411070 | orchestrator | 2026-04-01 00:30:39.411076 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-01 00:30:39.411082 | orchestrator | Wednesday 01 April 2026 00:30:25 +0000 (0:00:01.252) 0:00:53.307 ******* 2026-04-01 00:30:39.411088 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:30:39.411094 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:30:39.411100 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:30:39.411107 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:30:39.411113 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:30:39.411119 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:30:39.411142 | orchestrator | changed: [testbed-manager] 2026-04-01 00:30:39.411149 | 
orchestrator | 2026-04-01 00:30:39.411156 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-01 00:30:39.411162 | orchestrator | Wednesday 01 April 2026 00:30:36 +0000 (0:00:11.097) 0:01:04.405 ******* 2026-04-01 00:30:39.411168 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:39.411174 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.411180 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.411187 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.411193 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:39.411199 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.411205 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.411211 | orchestrator | 2026-04-01 00:30:39.411217 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-01 00:30:39.411223 | orchestrator | Wednesday 01 April 2026 00:30:37 +0000 (0:00:01.234) 0:01:05.639 ******* 2026-04-01 00:30:39.411229 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:39.411237 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.411246 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.411259 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.411272 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.411283 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:39.411292 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.411301 | orchestrator | 2026-04-01 00:30:39.411311 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-01 00:30:39.411320 | orchestrator | Wednesday 01 April 2026 00:30:38 +0000 (0:00:00.998) 0:01:06.638 ******* 2026-04-01 00:30:39.411329 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:39.411338 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.411348 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.411357 | orchestrator | ok: 
[testbed-node-2] 2026-04-01 00:30:39.411366 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.411375 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.411386 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.411395 | orchestrator | 2026-04-01 00:30:39.411404 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-01 00:30:39.411414 | orchestrator | Wednesday 01 April 2026 00:30:38 +0000 (0:00:00.224) 0:01:06.862 ******* 2026-04-01 00:30:39.411424 | orchestrator | ok: [testbed-manager] 2026-04-01 00:30:39.411442 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:30:39.411453 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:30:39.411459 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:30:39.411465 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:30:39.411471 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:30:39.411477 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:30:39.411484 | orchestrator | 2026-04-01 00:30:39.411490 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-01 00:30:39.411496 | orchestrator | Wednesday 01 April 2026 00:30:39 +0000 (0:00:00.235) 0:01:07.098 ******* 2026-04-01 00:30:39.411503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:30:39.411509 | orchestrator | 2026-04-01 00:30:39.411523 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-01 00:33:04.311770 | orchestrator | Wednesday 01 April 2026 00:30:39 +0000 (0:00:00.276) 0:01:07.374 ******* 2026-04-01 00:33:04.311920 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.311946 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.311953 | orchestrator | 
ok: [testbed-node-2] 2026-04-01 00:33:04.311958 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.311964 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.311970 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.311976 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.311981 | orchestrator | 2026-04-01 00:33:04.311988 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-01 00:33:04.311994 | orchestrator | Wednesday 01 April 2026 00:30:41 +0000 (0:00:02.013) 0:01:09.387 ******* 2026-04-01 00:33:04.312001 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:04.312007 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:04.312013 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:04.312018 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:04.312024 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:04.312029 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:04.312035 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:04.312040 | orchestrator | 2026-04-01 00:33:04.312046 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-01 00:33:04.312052 | orchestrator | Wednesday 01 April 2026 00:30:42 +0000 (0:00:00.725) 0:01:10.112 ******* 2026-04-01 00:33:04.312058 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.312064 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.312070 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.312075 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312081 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312086 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312092 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312097 | orchestrator | 2026-04-01 00:33:04.312103 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-01 
00:33:04.312109 | orchestrator | Wednesday 01 April 2026 00:30:42 +0000 (0:00:00.233) 0:01:10.346 ******* 2026-04-01 00:33:04.312114 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.312120 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.312125 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312131 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312136 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312142 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.312147 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312153 | orchestrator | 2026-04-01 00:33:04.312158 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-01 00:33:04.312164 | orchestrator | Wednesday 01 April 2026 00:30:43 +0000 (0:00:01.462) 0:01:11.809 ******* 2026-04-01 00:33:04.312170 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:04.312178 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:04.312184 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:04.312207 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:04.312213 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:04.312219 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:04.312224 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:04.312229 | orchestrator | 2026-04-01 00:33:04.312235 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-01 00:33:04.312240 | orchestrator | Wednesday 01 April 2026 00:30:46 +0000 (0:00:02.236) 0:01:14.046 ******* 2026-04-01 00:33:04.312246 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.312251 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.312257 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312262 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312268 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312273 | orchestrator | ok: 
[testbed-node-1] 2026-04-01 00:33:04.312279 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312284 | orchestrator | 2026-04-01 00:33:04.312290 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-01 00:33:04.312295 | orchestrator | Wednesday 01 April 2026 00:30:49 +0000 (0:00:02.950) 0:01:16.996 ******* 2026-04-01 00:33:04.312301 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.312306 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312312 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312317 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312322 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.312328 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312333 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.312339 | orchestrator | 2026-04-01 00:33:04.312346 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-01 00:33:04.312352 | orchestrator | Wednesday 01 April 2026 00:31:28 +0000 (0:00:39.169) 0:01:56.166 ******* 2026-04-01 00:33:04.312359 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:04.312365 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:04.312372 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:04.312378 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:04.312384 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:04.312391 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:04.312397 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:04.312403 | orchestrator | 2026-04-01 00:33:04.312410 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-01 00:33:04.312416 | orchestrator | Wednesday 01 April 2026 00:32:49 +0000 (0:01:21.570) 0:03:17.736 ******* 2026-04-01 00:33:04.312423 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:04.312429 | orchestrator | 
ok: [testbed-node-0] 2026-04-01 00:33:04.312439 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.312446 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312452 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312459 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312465 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312472 | orchestrator | 2026-04-01 00:33:04.312478 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-01 00:33:04.312484 | orchestrator | Wednesday 01 April 2026 00:32:51 +0000 (0:00:02.061) 0:03:19.797 ******* 2026-04-01 00:33:04.312489 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:04.312495 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:04.312500 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:04.312505 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:04.312511 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:04.312516 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:04.312522 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:04.312527 | orchestrator | 2026-04-01 00:33:04.312532 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-01 00:33:04.312538 | orchestrator | Wednesday 01 April 2026 00:33:03 +0000 (0:00:11.429) 0:03:31.226 ******* 2026-04-01 00:33:04.312562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-01 00:33:04.312577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-01 00:33:04.312585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-01 00:33:04.312595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-01 00:33:04.312601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-01 00:33:04.312606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-01 00:33:04.312612 | orchestrator | 2026-04-01 00:33:04.312618 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-01 00:33:04.312623 | orchestrator | Wednesday 01 April 2026 00:33:03 +0000 (0:00:00.385) 0:03:31.612 ******* 2026-04-01 00:33:04.312629 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:33:04.312635 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:04.312640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:33:04.312645 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:33:04.312651 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:04.312656 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:33:04.312662 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-01 00:33:04.312667 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:04.312672 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:33:04.312678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:33:04.312683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:33:04.312689 | orchestrator | 2026-04-01 00:33:04.312697 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-01 00:33:04.312703 | orchestrator | Wednesday 01 April 2026 00:33:04 +0000 (0:00:00.608) 0:03:32.220 ******* 2026-04-01 00:33:04.312713 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:33:04.312719 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:33:04.312725 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:33:04.312730 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:33:04.312735 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:33:04.312744 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:33:11.324399 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:33:11.324523 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:33:11.324539 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:33:11.324552 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:33:11.324569 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:11.324587 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:33:11.324603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:33:11.324619 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:33:11.324634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:33:11.324649 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:33:11.324665 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 
00:33:11.324678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:33:11.324693 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:33:11.324709 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:33:11.324725 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:33:11.324744 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:33:11.324756 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:33:11.324765 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:33:11.324774 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:33:11.324788 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:33:11.324803 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:33:11.324819 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:33:11.324916 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:11.324937 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:33:11.324953 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:33:11.324969 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:33:11.324985 | orchestrator | skipping: [testbed-node-4] 2026-04-01 
00:33:11.324999 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-01 00:33:11.325039 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-01 00:33:11.325056 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-01 00:33:11.325072 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-01 00:33:11.325088 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-01 00:33:11.325097 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-01 00:33:11.325106 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-01 00:33:11.325114 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-01 00:33:11.325129 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-01 00:33:11.325143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-01 00:33:11.325157 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:11.325172 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:33:11.325187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:33:11.325200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:33:11.325215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:33:11.325224 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:33:11.325250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:33:11.325259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-01 00:33:11.325268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-01 00:33:11.325294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325311 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-01 00:33:11.325319 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:33:11.325327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:33:11.325336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325344 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-01 00:33:11.325353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:33:11.325361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-01 00:33:11.325370 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 
'value': 4096}) 2026-04-01 00:33:11.325378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:33:11.325387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:33:11.325404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-01 00:33:11.325413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-01 00:33:11.325422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:33:11.325431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-01 00:33:11.325439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:33:11.325448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:33:11.325471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-01 00:33:11.325480 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-01 00:33:11.325489 | orchestrator | 2026-04-01 00:33:11.325498 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-01 00:33:11.325507 | orchestrator | Wednesday 01 April 2026 00:33:09 +0000 (0:00:04.812) 0:03:37.033 ******* 2026-04-01 00:33:11.325516 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325558 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325576 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325585 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-01 00:33:11.325602 | orchestrator | 2026-04-01 00:33:11.325611 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-01 00:33:11.325620 | orchestrator | Wednesday 01 April 2026 00:33:09 +0000 (0:00:00.611) 0:03:37.645 ******* 2026-04-01 00:33:11.325634 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:11.325643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:11.325652 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:11.325661 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:33:11.325669 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:11.325678 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:11.325687 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:33:11.325696 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:33:11.325705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:33:11.325713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:33:11.325728 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-04-01 00:33:24.100875 | orchestrator | 2026-04-01 00:33:24.101007 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-01 00:33:24.101030 | orchestrator | Wednesday 01 April 2026 00:33:11 +0000 (0:00:01.686) 0:03:39.331 ******* 2026-04-01 00:33:24.101046 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:24.101066 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:24.101085 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:24.101138 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:24.101158 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:24.101178 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:33:24.101192 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-01 00:33:24.101203 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:24.101215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:33:24.101226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:33:24.101237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-01 00:33:24.101248 | orchestrator | 2026-04-01 00:33:24.101259 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-01 00:33:24.101269 | orchestrator | Wednesday 01 April 2026 00:33:11 +0000 (0:00:00.589) 0:03:39.921 ******* 2026-04-01 00:33:24.101280 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  
2026-04-01 00:33:24.101291 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:24.101302 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:33:24.101313 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:33:24.101324 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:33:24.101335 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:33:24.101346 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-01 00:33:24.101357 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:33:24.101370 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:33:24.101382 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:33:24.101395 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-01 00:33:24.101408 | orchestrator | 2026-04-01 00:33:24.101420 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-01 00:33:24.101433 | orchestrator | Wednesday 01 April 2026 00:33:12 +0000 (0:00:00.701) 0:03:40.622 ******* 2026-04-01 00:33:24.101446 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:24.101458 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:33:24.101470 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:33:24.101483 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:33:24.101495 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:24.101507 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:33:24.101521 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:24.101533 | orchestrator | 2026-04-01 00:33:24.101546 | orchestrator | TASK 
[osism.commons.services : Populate service facts] ************************* 2026-04-01 00:33:24.101558 | orchestrator | Wednesday 01 April 2026 00:33:12 +0000 (0:00:00.271) 0:03:40.894 ******* 2026-04-01 00:33:24.101571 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:24.101585 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:24.101597 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:24.101610 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:24.101623 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:24.101635 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:24.101647 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:24.101659 | orchestrator | 2026-04-01 00:33:24.101671 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-01 00:33:24.101684 | orchestrator | Wednesday 01 April 2026 00:33:18 +0000 (0:00:05.412) 0:03:46.306 ******* 2026-04-01 00:33:24.101696 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-01 00:33:24.101719 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-01 00:33:24.101747 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:24.101760 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-01 00:33:24.101771 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:33:24.101782 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-01 00:33:24.101792 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:33:24.101803 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-01 00:33:24.101873 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:33:24.101886 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-01 00:33:24.101897 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:24.101908 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:33:24.101919 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-01 00:33:24.101930 
| orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:24.101940 | orchestrator | 2026-04-01 00:33:24.101951 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-01 00:33:24.101962 | orchestrator | Wednesday 01 April 2026 00:33:18 +0000 (0:00:00.283) 0:03:46.590 ******* 2026-04-01 00:33:24.101973 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-01 00:33:24.101984 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-01 00:33:24.101995 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-01 00:33:24.102099 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-01 00:33:24.102114 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-01 00:33:24.102125 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-01 00:33:24.102135 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-01 00:33:24.102146 | orchestrator | 2026-04-01 00:33:24.102157 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-01 00:33:24.102168 | orchestrator | Wednesday 01 April 2026 00:33:19 +0000 (0:00:01.090) 0:03:47.681 ******* 2026-04-01 00:33:24.102181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:33:24.102195 | orchestrator | 2026-04-01 00:33:24.102206 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-01 00:33:24.102216 | orchestrator | Wednesday 01 April 2026 00:33:20 +0000 (0:00:00.364) 0:03:48.046 ******* 2026-04-01 00:33:24.102227 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:24.102238 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:24.102249 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:24.102259 | orchestrator | ok: 
[testbed-node-3] 2026-04-01 00:33:24.102270 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:24.102280 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:24.102291 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:24.102301 | orchestrator | 2026-04-01 00:33:24.102312 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-01 00:33:24.102323 | orchestrator | Wednesday 01 April 2026 00:33:21 +0000 (0:00:01.536) 0:03:49.582 ******* 2026-04-01 00:33:24.102334 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:24.102344 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:24.102355 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:24.102365 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:24.102376 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:24.102386 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:24.102397 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:24.102407 | orchestrator | 2026-04-01 00:33:24.102418 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-01 00:33:24.102429 | orchestrator | Wednesday 01 April 2026 00:33:22 +0000 (0:00:00.645) 0:03:50.228 ******* 2026-04-01 00:33:24.102439 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:24.102450 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:24.102461 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:24.102483 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:24.102617 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:24.102634 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:24.102645 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:24.102656 | orchestrator | 2026-04-01 00:33:24.102667 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-01 00:33:24.102678 | orchestrator | Wednesday 01 April 2026 00:33:22 +0000 (0:00:00.686) 
0:03:50.914 ******* 2026-04-01 00:33:24.102689 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:24.102699 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:24.102710 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:24.102721 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:24.102732 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:24.102742 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:24.102753 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:24.102763 | orchestrator | 2026-04-01 00:33:24.102775 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-01 00:33:24.102786 | orchestrator | Wednesday 01 April 2026 00:33:23 +0000 (0:00:00.613) 0:03:51.527 ******* 2026-04-01 00:33:24.102802 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001940.6566904, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:24.102847 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001895.151064, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:24.102860 | orchestrator | changed: 
[testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001961.9754188, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:24.102899 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001990.1855369, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753593 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775002006.4079304, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753751 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001960.5441294, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753774 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775001963.1160254, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753786 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753798 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753863 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753875 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753922 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753944 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753956 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-01 00:33:29.753969 | orchestrator | 2026-04-01 00:33:29.753993 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-01 00:33:29.754006 | orchestrator | Wednesday 01 April 2026 00:33:24 +0000 (0:00:01.156) 0:03:52.684 ******* 2026-04-01 00:33:29.754077 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:29.754091 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:29.754102 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:29.754113 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:29.754126 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:29.754139 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:29.754151 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:29.754163 | orchestrator | 2026-04-01 00:33:29.754176 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-01 00:33:29.754188 | orchestrator | Wednesday 01 April 2026 00:33:25 +0000 (0:00:01.149) 0:03:53.833 ******* 2026-04-01 00:33:29.754201 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:29.754214 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:29.754227 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:29.754238 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:29.754252 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:29.754264 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:29.754277 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:29.754289 | orchestrator | 2026-04-01 00:33:29.754302 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-01 00:33:29.754314 | orchestrator | Wednesday 01 April 2026 00:33:27 +0000 (0:00:01.236) 0:03:55.070 ******* 2026-04-01 00:33:29.754326 | orchestrator | changed: [testbed-manager] 2026-04-01 00:33:29.754339 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:33:29.754351 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:33:29.754364 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:33:29.754376 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:33:29.754389 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:33:29.754460 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:33:29.754473 | orchestrator | 2026-04-01 00:33:29.754490 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-01 00:33:29.754502 | orchestrator | Wednesday 01 April 2026 00:33:28 +0000 (0:00:01.197) 0:03:56.268 ******* 2026-04-01 00:33:29.754513 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:33:29.754523 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:33:29.754534 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:33:29.754545 | orchestrator | skipping: [testbed-node-2] 
2026-04-01 00:33:29.754556 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:33:29.754566 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:33:29.754577 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:33:29.754596 | orchestrator | 2026-04-01 00:33:29.754607 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-01 00:33:29.754618 | orchestrator | Wednesday 01 April 2026 00:33:28 +0000 (0:00:00.327) 0:03:56.595 ******* 2026-04-01 00:33:29.754629 | orchestrator | ok: [testbed-manager] 2026-04-01 00:33:29.754641 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:33:29.754652 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:33:29.754662 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:33:29.754673 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:33:29.754738 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:33:29.754784 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:33:29.754798 | orchestrator | 2026-04-01 00:33:29.754832 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-01 00:33:29.754844 | orchestrator | Wednesday 01 April 2026 00:33:29 +0000 (0:00:00.761) 0:03:57.356 ******* 2026-04-01 00:33:29.754858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:33:29.754872 | orchestrator | 2026-04-01 00:33:29.754883 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-01 00:33:29.754905 | orchestrator | Wednesday 01 April 2026 00:33:29 +0000 (0:00:00.362) 0:03:57.718 ******* 2026-04-01 00:34:53.913505 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:53.913608 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:34:53.913629 | orchestrator | changed: 
[testbed-node-3] 2026-04-01 00:34:53.913731 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:34:53.913753 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:34:53.913770 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:34:53.913782 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:34:53.913794 | orchestrator | 2026-04-01 00:34:53.913806 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-01 00:34:53.913818 | orchestrator | Wednesday 01 April 2026 00:33:39 +0000 (0:00:09.825) 0:04:07.543 ******* 2026-04-01 00:34:53.913829 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:53.913840 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:53.913851 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:53.913862 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:53.913873 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:53.913883 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:53.913894 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:53.913905 | orchestrator | 2026-04-01 00:34:53.913916 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-01 00:34:53.913927 | orchestrator | Wednesday 01 April 2026 00:33:41 +0000 (0:00:01.530) 0:04:09.074 ******* 2026-04-01 00:34:53.913938 | orchestrator | ok: [testbed-manager] 2026-04-01 00:34:53.913949 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:34:53.913959 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:34:53.913970 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:34:53.913981 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:34:53.913991 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:34:53.914002 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:34:53.914014 | orchestrator | 2026-04-01 00:34:53.914089 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-01 00:34:53.914102 | orchestrator | 
Wednesday 01 April 2026 00:33:42 +0000 (0:00:01.055) 0:04:10.130 *******
2026-04-01 00:34:53.914114 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:53.914127 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:53.914139 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:53.914152 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:53.914165 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:53.914177 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:53.914190 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:53.914202 | orchestrator |
2026-04-01 00:34:53.914216 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-01 00:34:53.914256 | orchestrator | Wednesday 01 April 2026 00:33:42 +0000 (0:00:00.265) 0:04:10.396 *******
2026-04-01 00:34:53.914268 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:53.914281 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:53.914293 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:53.914306 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:53.914318 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:53.914330 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:53.914343 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:53.914354 | orchestrator |
2026-04-01 00:34:53.914365 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-01 00:34:53.914375 | orchestrator | Wednesday 01 April 2026 00:33:42 +0000 (0:00:00.290) 0:04:10.686 *******
2026-04-01 00:34:53.914386 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:53.914397 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:53.914407 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:53.914418 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:53.914428 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:53.914439 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:53.914449 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:53.914460 | orchestrator |
2026-04-01 00:34:53.914470 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-01 00:34:53.914481 | orchestrator | Wednesday 01 April 2026 00:33:42 +0000 (0:00:00.283) 0:04:10.969 *******
2026-04-01 00:34:53.914492 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:53.914502 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:53.914513 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:53.914523 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:53.914534 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:53.914544 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:53.914555 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:53.914565 | orchestrator |
2026-04-01 00:34:53.914576 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-01 00:34:53.914601 | orchestrator | Wednesday 01 April 2026 00:33:48 +0000 (0:00:05.867) 0:04:16.837 *******
2026-04-01 00:34:53.914615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:34:53.914628 | orchestrator |
2026-04-01 00:34:53.914640 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-01 00:34:53.914705 | orchestrator | Wednesday 01 April 2026 00:33:49 +0000 (0:00:00.379) 0:04:17.216 *******
2026-04-01 00:34:53.914717 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914728 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-01 00:34:53.914740 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:34:53.914750 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914761 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-01 00:34:53.914772 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:34:53.914783 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914794 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-01 00:34:53.914805 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:34:53.914815 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914826 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-01 00:34:53.914837 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:34:53.914848 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914858 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-01 00:34:53.914869 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914880 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-01 00:34:53.914921 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:34:53.914933 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:34:53.914944 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-01 00:34:53.914954 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-01 00:34:53.914965 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:34:53.914976 | orchestrator |
2026-04-01 00:34:53.914987 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-01 00:34:53.914998 | orchestrator | Wednesday 01 April 2026 00:33:49 +0000 (0:00:00.321) 0:04:17.538 *******
2026-04-01 00:34:53.915009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:34:53.915020 | orchestrator |
2026-04-01 00:34:53.915031 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-01 00:34:53.915041 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.494) 0:04:18.032 *******
2026-04-01 00:34:53.915052 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-01 00:34:53.915063 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:34:53.915073 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-01 00:34:53.915084 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-01 00:34:53.915095 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:34:53.915106 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-01 00:34:53.915116 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:34:53.915127 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-01 00:34:53.915138 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:34:53.915149 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:34:53.915159 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-01 00:34:53.915170 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:34:53.915181 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-01 00:34:53.915191 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:34:53.915202 | orchestrator |
2026-04-01 00:34:53.915213 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-01 00:34:53.915223 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.288) 0:04:18.321 *******
2026-04-01 00:34:53.915234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:34:53.915245 | orchestrator |
2026-04-01 00:34:53.915256 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-01 00:34:53.915266 | orchestrator | Wednesday 01 April 2026 00:33:50 +0000 (0:00:00.402) 0:04:18.723 *******
2026-04-01 00:34:53.915277 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:34:53.915288 | orchestrator | changed: [testbed-manager]
2026-04-01 00:34:53.915298 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:34:53.915309 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:34:53.915319 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:34:53.915330 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:34:53.915340 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:34:53.915351 | orchestrator |
2026-04-01 00:34:53.915361 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-01 00:34:53.915372 | orchestrator | Wednesday 01 April 2026 00:34:26 +0000 (0:00:35.651) 0:04:54.374 *******
2026-04-01 00:34:53.915383 | orchestrator | changed: [testbed-manager]
2026-04-01 00:34:53.915393 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:34:53.915404 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:34:53.915420 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:34:53.915431 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:34:53.915460 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:34:53.915471 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:34:53.915481 | orchestrator |
2026-04-01 00:34:53.915492 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-01 00:34:53.915503 | orchestrator | Wednesday 01 April 2026 00:34:35 +0000 (0:00:09.365) 0:05:03.739 *******
2026-04-01 00:34:53.915513 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:34:53.915524 | orchestrator | changed: [testbed-manager]
2026-04-01 00:34:53.915535 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:34:53.915545 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:34:53.915555 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:34:53.915566 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:34:53.915576 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:34:53.915587 | orchestrator |
2026-04-01 00:34:53.915598 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-01 00:34:53.915608 | orchestrator | Wednesday 01 April 2026 00:34:44 +0000 (0:00:09.120) 0:05:12.860 *******
2026-04-01 00:34:53.915619 | orchestrator | ok: [testbed-manager]
2026-04-01 00:34:53.915630 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:34:53.915640 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:34:53.915667 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:34:53.915678 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:34:53.915689 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:34:53.915699 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:34:53.915710 | orchestrator |
2026-04-01 00:34:53.915721 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-01 00:34:53.915731 | orchestrator | Wednesday 01 April 2026 00:34:46 +0000 (0:00:02.086) 0:05:14.946 *******
2026-04-01 00:34:53.915742 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:34:53.915753 | orchestrator | changed: [testbed-manager]
2026-04-01 00:34:53.915763 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:34:53.915774 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:34:53.915785 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:34:53.915795 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:34:53.915806 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:34:53.915816 | orchestrator |
2026-04-01 00:34:53.915833 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-01 00:35:05.510917 | orchestrator | Wednesday 01 April 2026 00:34:53 +0000 (0:00:06.933) 0:05:21.880 *******
2026-04-01 00:35:05.511045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:05.511064 | orchestrator |
2026-04-01 00:35:05.511076 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-01 00:35:05.511088 | orchestrator | Wednesday 01 April 2026 00:34:54 +0000 (0:00:00.348) 0:05:22.229 *******
2026-04-01 00:35:05.511099 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:05.511112 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:05.511123 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:05.511133 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:05.511144 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:05.511155 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:05.511166 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:05.511177 | orchestrator |
2026-04-01 00:35:05.511188 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-01 00:35:05.511200 | orchestrator | Wednesday 01 April 2026 00:34:54 +0000 (0:00:00.650) 0:05:22.879 *******
2026-04-01 00:35:05.511211 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:05.511222 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:05.511233 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:05.511244 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:05.511255 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:05.511266 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:05.511301 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:05.511312 | orchestrator |
2026-04-01 00:35:05.511324 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-01 00:35:05.511335 | orchestrator | Wednesday 01 April 2026 00:34:56 +0000 (0:00:02.087) 0:05:24.967 *******
2026-04-01 00:35:05.511346 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:35:05.511357 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:35:05.511368 | orchestrator | changed: [testbed-manager]
2026-04-01 00:35:05.511378 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:35:05.511389 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:35:05.511400 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:35:05.511410 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:35:05.511421 | orchestrator |
2026-04-01 00:35:05.511432 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-01 00:35:05.511443 | orchestrator | Wednesday 01 April 2026 00:34:57 +0000 (0:00:00.778) 0:05:25.746 *******
2026-04-01 00:35:05.511454 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.511465 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.511475 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.511486 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:05.511497 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:05.511508 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:05.511518 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:05.511529 | orchestrator |
2026-04-01 00:35:05.511540 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-01 00:35:05.511551 | orchestrator | Wednesday 01 April 2026 00:34:58 +0000 (0:00:00.285) 0:05:26.032 *******
2026-04-01 00:35:05.511562 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.511573 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.511584 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.511594 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:05.511605 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:05.511615 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:05.511728 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:05.511742 | orchestrator |
2026-04-01 00:35:05.511753 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-01 00:35:05.511764 | orchestrator | Wednesday 01 April 2026 00:34:58 +0000 (0:00:00.379) 0:05:26.411 *******
2026-04-01 00:35:05.511775 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:05.511786 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:05.511813 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:05.511864 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:05.511887 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:05.511910 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:05.511922 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:05.511940 | orchestrator |
2026-04-01 00:35:05.511955 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-01 00:35:05.511967 | orchestrator | Wednesday 01 April 2026 00:34:58 +0000 (0:00:00.370) 0:05:26.782 *******
2026-04-01 00:35:05.511978 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.511988 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.511999 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.512010 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:05.512021 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:05.512031 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:05.512042 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:05.512053 | orchestrator |
2026-04-01 00:35:05.512064 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-01 00:35:05.512076 | orchestrator | Wednesday 01 April 2026 00:34:59 +0000 (0:00:00.251) 0:05:27.033 *******
2026-04-01 00:35:05.512087 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:05.512098 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:05.512108 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:05.512129 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:05.512140 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:05.512151 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:05.512161 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:05.512172 | orchestrator |
2026-04-01 00:35:05.512183 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-01 00:35:05.512194 | orchestrator | Wednesday 01 April 2026 00:34:59 +0000 (0:00:00.289) 0:05:27.334 *******
2026-04-01 00:35:05.512205 | orchestrator | ok: [testbed-manager] =>
2026-04-01 00:35:05.512216 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512226 | orchestrator | ok: [testbed-node-0] =>
2026-04-01 00:35:05.512237 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512248 | orchestrator | ok: [testbed-node-1] =>
2026-04-01 00:35:05.512259 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512269 | orchestrator | ok: [testbed-node-2] =>
2026-04-01 00:35:05.512280 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512312 | orchestrator | ok: [testbed-node-3] =>
2026-04-01 00:35:05.512324 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512335 | orchestrator | ok: [testbed-node-4] =>
2026-04-01 00:35:05.512346 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512356 | orchestrator | ok: [testbed-node-5] =>
2026-04-01 00:35:05.512367 | orchestrator |  docker_version: 5:27.5.1
2026-04-01 00:35:05.512378 | orchestrator |
2026-04-01 00:35:05.512389 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-01 00:35:05.512399 | orchestrator | Wednesday 01 April 2026 00:34:59 +0000 (0:00:00.289) 0:05:27.623 *******
2026-04-01 00:35:05.512410 | orchestrator | ok: [testbed-manager] =>
2026-04-01 00:35:05.512421 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512432 | orchestrator | ok: [testbed-node-0] =>
2026-04-01 00:35:05.512442 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512453 | orchestrator | ok: [testbed-node-1] =>
2026-04-01 00:35:05.512464 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512474 | orchestrator | ok: [testbed-node-2] =>
2026-04-01 00:35:05.512485 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512496 | orchestrator | ok: [testbed-node-3] =>
2026-04-01 00:35:05.512506 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512517 | orchestrator | ok: [testbed-node-4] =>
2026-04-01 00:35:05.512528 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512539 | orchestrator | ok: [testbed-node-5] =>
2026-04-01 00:35:05.512549 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-01 00:35:05.512560 | orchestrator |
2026-04-01 00:35:05.512571 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-01 00:35:05.512582 | orchestrator | Wednesday 01 April 2026 00:34:59 +0000 (0:00:00.300) 0:05:27.924 *******
2026-04-01 00:35:05.512592 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.512603 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.512614 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.512645 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:05.512657 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:05.512668 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:05.512679 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:05.512689 | orchestrator |
2026-04-01 00:35:05.512700 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-01 00:35:05.512711 | orchestrator | Wednesday 01 April 2026 00:35:00 +0000 (0:00:00.239) 0:05:28.163 *******
2026-04-01 00:35:05.512722 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.512733 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.512744 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.512754 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:35:05.512765 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:35:05.512776 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:35:05.512787 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:35:05.512797 | orchestrator |
2026-04-01 00:35:05.512808 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-01 00:35:05.512825 | orchestrator | Wednesday 01 April 2026 00:35:00 +0000 (0:00:00.263) 0:05:28.427 *******
2026-04-01 00:35:05.512838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:35:05.512851 | orchestrator |
2026-04-01 00:35:05.512862 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-01 00:35:05.512873 | orchestrator | Wednesday 01 April 2026 00:35:00 +0000 (0:00:00.396) 0:05:28.823 *******
2026-04-01 00:35:05.512884 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:05.512895 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:05.512905 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:05.512916 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:05.512927 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:05.512938 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:05.512948 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:05.512959 | orchestrator |
2026-04-01 00:35:05.512970 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-01 00:35:05.512981 | orchestrator | Wednesday 01 April 2026 00:35:01 +0000 (0:00:00.858) 0:05:29.682 *******
2026-04-01 00:35:05.512992 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:35:05.513003 | orchestrator | ok: [testbed-manager]
2026-04-01 00:35:05.513013 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:35:05.513024 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:35:05.513035 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:35:05.513046 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:35:05.513056 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:35:05.513067 | orchestrator |
2026-04-01 00:35:05.513078 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-01 00:35:05.513090 | orchestrator | Wednesday 01 April 2026 00:35:05 +0000 (0:00:03.456) 0:05:33.139 *******
2026-04-01 00:35:05.513101 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-01 00:35:05.513112 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-01 00:35:05.513166 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-01 00:35:05.513178 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:35:05.513189 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-01 00:35:05.513200 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-01 00:35:05.513211 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-01 00:35:05.513222 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:35:05.513233 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-01 00:35:05.513244 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-01 00:35:05.513254 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-01 00:35:05.513265 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-01 00:35:05.513276 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-01 00:35:05.513286 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-01 00:35:05.513297 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:35:05.513308 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-01 00:35:05.513326 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-01 00:36:11.101483 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-01 00:36:11.101659 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:11.101679 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-01 00:36:11.101692 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-01 00:36:11.101704 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-01 00:36:11.101715 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:11.101726 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:11.101762 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-01 00:36:11.101774 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-01 00:36:11.101785 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-01 00:36:11.101796 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:11.101808 | orchestrator |
2026-04-01 00:36:11.101821 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-01 00:36:11.101832 | orchestrator | Wednesday 01 April 2026 00:35:05 +0000 (0:00:00.551) 0:05:33.691 *******
2026-04-01 00:36:11.101843 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.101855 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.101865 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.101876 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.101887 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.101898 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.101909 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.101920 | orchestrator |
2026-04-01 00:36:11.101931 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-01 00:36:11.101942 | orchestrator | Wednesday 01 April 2026 00:35:13 +0000 (0:00:07.600) 0:05:41.291 *******
2026-04-01 00:36:11.101953 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.101964 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.101975 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.101986 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.101997 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.102008 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102077 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.102091 | orchestrator |
2026-04-01 00:36:11.102104 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-01 00:36:11.102115 | orchestrator | Wednesday 01 April 2026 00:35:14 +0000 (0:00:01.076) 0:05:42.368 *******
2026-04-01 00:36:11.102127 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.102138 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.102148 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.102159 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102170 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.102181 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.102192 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.102203 | orchestrator |
2026-04-01 00:36:11.102214 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-01 00:36:11.102225 | orchestrator | Wednesday 01 April 2026 00:35:23 +0000 (0:00:09.198) 0:05:51.566 *******
2026-04-01 00:36:11.102236 | orchestrator | changed: [testbed-manager]
2026-04-01 00:36:11.102247 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.102258 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.102269 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.102280 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.102291 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102302 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.102313 | orchestrator |
2026-04-01 00:36:11.102324 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-01 00:36:11.102335 | orchestrator | Wednesday 01 April 2026 00:35:27 +0000 (0:00:03.800) 0:05:55.367 *******
2026-04-01 00:36:11.102346 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.102357 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.102368 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.102379 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.102389 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.102429 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102457 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.102475 | orchestrator |
2026-04-01 00:36:11.102493 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-01 00:36:11.102543 | orchestrator | Wednesday 01 April 2026 00:35:28 +0000 (0:00:01.304) 0:05:56.672 *******
2026-04-01 00:36:11.102575 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.102594 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.102610 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.102627 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.102644 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.102661 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102676 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.102693 | orchestrator |
2026-04-01 00:36:11.102712 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-01 00:36:11.102730 | orchestrator | Wednesday 01 April 2026 00:35:30 +0000 (0:00:01.325) 0:05:57.998 *******
2026-04-01 00:36:11.102748 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:36:11.102766 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:36:11.102784 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:11.102801 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:11.102819 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:11.102836 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:11.102854 | orchestrator | changed: [testbed-manager]
2026-04-01 00:36:11.102872 | orchestrator |
2026-04-01 00:36:11.102890 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-01 00:36:11.102908 | orchestrator | Wednesday 01 April 2026 00:35:30 +0000 (0:00:00.564) 0:05:58.562 *******
2026-04-01 00:36:11.102926 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.102945 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.102963 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.102981 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.102998 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.103017 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.103036 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.103055 | orchestrator |
2026-04-01 00:36:11.103073 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-01 00:36:11.103114 | orchestrator | Wednesday 01 April 2026 00:35:41 +0000 (0:00:10.479) 0:06:09.042 *******
2026-04-01 00:36:11.103126 | orchestrator | changed: [testbed-manager]
2026-04-01 00:36:11.103138 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.103148 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.103159 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.103170 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.103181 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.103191 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.103202 | orchestrator |
2026-04-01 00:36:11.103213 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-01 00:36:11.103224 | orchestrator | Wednesday 01 April 2026 00:35:42 +0000 (0:00:00.969) 0:06:10.011 *******
2026-04-01 00:36:11.103235 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.103246 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.103257 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.103267 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.103278 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.103289 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.103300 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.103311 | orchestrator |
2026-04-01 00:36:11.103321 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-01 00:36:11.103332 | orchestrator | Wednesday 01 April 2026 00:35:52 +0000 (0:00:10.417) 0:06:20.429 *******
2026-04-01 00:36:11.103347 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.103365 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.103382 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.103400 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.103419 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.103437 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.103456 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.103489 | orchestrator |
2026-04-01 00:36:11.103534 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-01 00:36:11.103546 | orchestrator | Wednesday 01 April 2026 00:36:04 +0000 (0:00:11.935) 0:06:32.365 *******
2026-04-01 00:36:11.103557 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-01 00:36:11.103568 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-01 00:36:11.103579 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-01 00:36:11.103590 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-01 00:36:11.103601 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-01 00:36:11.103611 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-01 00:36:11.103622 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-01 00:36:11.103633 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-01 00:36:11.103643 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-01 00:36:11.103654 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-01 00:36:11.103665 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-01 00:36:11.103676 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-01 00:36:11.103687 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-01 00:36:11.103697 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-01 00:36:11.103708 | orchestrator |
2026-04-01 00:36:11.103719 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-01 00:36:11.103730 | orchestrator | Wednesday 01 April 2026 00:36:05 +0000 (0:00:01.209) 0:06:33.574 *******
2026-04-01 00:36:11.103740 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:36:11.103751 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:36:11.103762 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:36:11.103773 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:11.103783 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:11.103794 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:11.103805 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:11.103816 | orchestrator |
2026-04-01 00:36:11.103827 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-01 00:36:11.103838 | orchestrator | Wednesday 01 April 2026 00:36:06 +0000 (0:00:00.621) 0:06:34.196 *******
2026-04-01 00:36:11.103849 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:11.103860 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:11.103871 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:11.103881 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:11.103892 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:11.103902 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:11.103913 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:36:11.103924 | orchestrator |
2026-04-01 00:36:11.103935 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-01 00:36:11.103947 | orchestrator | Wednesday 01 April 2026 00:36:10 +0000 (0:00:04.071) 0:06:38.268 *******
2026-04-01 00:36:11.103958 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:36:11.103968 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:36:11.103982 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:36:11.103999 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:11.104019 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:11.104036 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:11.104053 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:11.104071 | orchestrator |
2026-04-01 00:36:11.104091 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-01 00:36:11.104111 | orchestrator | Wednesday 01 April 2026 00:36:10 +0000 (0:00:00.477) 0:06:38.745 *******
2026-04-01 00:36:11.104128 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-01 00:36:11.104143 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-01 00:36:11.104154 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:36:11.104172 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-01 00:36:11.104183 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-01 00:36:11.104194 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:36:11.104205 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-01 00:36:11.104216 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-01 00:36:11.104227 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:36:11.104248 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-01 00:36:30.362163 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-01 00:36:30.362244 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:30.362252 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-01 00:36:30.362258 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-01 00:36:30.362264 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:30.362269 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-01 00:36:30.362275 | orchestrator | skipping:
[testbed-node-4] => (item=python-docker)  2026-04-01 00:36:30.362280 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:30.362285 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-01 00:36:30.362291 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-01 00:36:30.362296 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:30.362302 | orchestrator | 2026-04-01 00:36:30.362308 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-01 00:36:30.362315 | orchestrator | Wednesday 01 April 2026 00:36:11 +0000 (0:00:00.595) 0:06:39.341 ******* 2026-04-01 00:36:30.362320 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:30.362326 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:30.362331 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:30.362336 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:30.362341 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:30.362347 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:30.362352 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:30.362357 | orchestrator | 2026-04-01 00:36:30.362362 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-01 00:36:30.362368 | orchestrator | Wednesday 01 April 2026 00:36:11 +0000 (0:00:00.465) 0:06:39.807 ******* 2026-04-01 00:36:30.362373 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:30.362378 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:30.362383 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:30.362388 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:30.362393 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:30.362399 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:30.362404 | orchestrator | skipping: [testbed-node-5] 2026-04-01 
00:36:30.362409 | orchestrator | 2026-04-01 00:36:30.362414 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-04-01 00:36:30.362419 | orchestrator | Wednesday 01 April 2026 00:36:12 +0000 (0:00:00.639) 0:06:40.446 ******* 2026-04-01 00:36:30.362424 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:30.362430 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:30.362435 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:30.362440 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:30.362445 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:30.362450 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:30.362455 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:30.362524 | orchestrator | 2026-04-01 00:36:30.362537 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-01 00:36:30.362545 | orchestrator | Wednesday 01 April 2026 00:36:12 +0000 (0:00:00.505) 0:06:40.952 ******* 2026-04-01 00:36:30.362553 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.362559 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.362589 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:30.362594 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.362599 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.362604 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.362610 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:30.362615 | orchestrator | 2026-04-01 00:36:30.362620 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-01 00:36:30.362626 | orchestrator | Wednesday 01 April 2026 00:36:14 +0000 (0:00:01.860) 0:06:42.812 ******* 2026-04-01 00:36:30.362643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:30.362650 | orchestrator | 2026-04-01 00:36:30.362655 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-01 00:36:30.362661 | orchestrator | Wednesday 01 April 2026 00:36:15 +0000 (0:00:00.827) 0:06:43.640 ******* 2026-04-01 00:36:30.362667 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.362675 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:30.362684 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:30.362691 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:30.362699 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:30.362705 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:30.362711 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:30.362717 | orchestrator | 2026-04-01 00:36:30.362723 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-01 00:36:30.362729 | orchestrator | Wednesday 01 April 2026 00:36:16 +0000 (0:00:01.000) 0:06:44.640 ******* 2026-04-01 00:36:30.362735 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.362740 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:30.362746 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:30.362752 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:30.362758 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:30.362764 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:30.362769 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:30.362775 | orchestrator | 2026-04-01 00:36:30.362781 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-01 00:36:30.362787 | orchestrator | Wednesday 01 April 2026 00:36:17 +0000 (0:00:00.843) 0:06:45.484 ******* 2026-04-01 00:36:30.362794 | orchestrator | ok: [testbed-manager] 2026-04-01 
00:36:30.362802 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:30.362810 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:30.362818 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:30.362827 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:30.362836 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:30.362844 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:30.362852 | orchestrator | 2026-04-01 00:36:30.362861 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-01 00:36:30.362886 | orchestrator | Wednesday 01 April 2026 00:36:18 +0000 (0:00:01.380) 0:06:46.865 ******* 2026-04-01 00:36:30.362895 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:30.362903 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.362911 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:30.362920 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.362929 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.362938 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.362947 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:30.362955 | orchestrator | 2026-04-01 00:36:30.362963 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-01 00:36:30.362985 | orchestrator | Wednesday 01 April 2026 00:36:20 +0000 (0:00:01.386) 0:06:48.251 ******* 2026-04-01 00:36:30.362995 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.363004 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:30.363011 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:30.363027 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:30.363033 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:30.363038 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:30.363043 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:30.363048 | orchestrator | 2026-04-01 
00:36:30.363053 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-01 00:36:30.363059 | orchestrator | Wednesday 01 April 2026 00:36:21 +0000 (0:00:01.408) 0:06:49.660 ******* 2026-04-01 00:36:30.363064 | orchestrator | changed: [testbed-manager] 2026-04-01 00:36:30.363069 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:30.363074 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:30.363079 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:30.363084 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:30.363089 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:30.363094 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:30.363099 | orchestrator | 2026-04-01 00:36:30.363104 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-01 00:36:30.363110 | orchestrator | Wednesday 01 April 2026 00:36:23 +0000 (0:00:01.618) 0:06:51.278 ******* 2026-04-01 00:36:30.363115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:30.363121 | orchestrator | 2026-04-01 00:36:30.363126 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-01 00:36:30.363133 | orchestrator | Wednesday 01 April 2026 00:36:24 +0000 (0:00:00.845) 0:06:52.123 ******* 2026-04-01 00:36:30.363141 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.363150 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.363159 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:30.363167 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.363175 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.363183 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.363190 | orchestrator | ok: 
[testbed-node-5] 2026-04-01 00:36:30.363198 | orchestrator | 2026-04-01 00:36:30.363207 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-01 00:36:30.363215 | orchestrator | Wednesday 01 April 2026 00:36:25 +0000 (0:00:01.361) 0:06:53.484 ******* 2026-04-01 00:36:30.363224 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.363231 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.363238 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:30.363246 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.363254 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.363262 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.363270 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:30.363280 | orchestrator | 2026-04-01 00:36:30.363287 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-01 00:36:30.363294 | orchestrator | Wednesday 01 April 2026 00:36:26 +0000 (0:00:01.388) 0:06:54.873 ******* 2026-04-01 00:36:30.363301 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.363309 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.363316 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:30.363324 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.363331 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.363344 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.363351 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:30.363359 | orchestrator | 2026-04-01 00:36:30.363366 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-01 00:36:30.363374 | orchestrator | Wednesday 01 April 2026 00:36:28 +0000 (0:00:01.143) 0:06:56.016 ******* 2026-04-01 00:36:30.363381 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:30.363389 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:30.363397 | orchestrator | ok: [testbed-node-1] 2026-04-01 
00:36:30.363405 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:30.363414 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:30.363429 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:30.363437 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:30.363442 | orchestrator | 2026-04-01 00:36:30.363447 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-01 00:36:30.363452 | orchestrator | Wednesday 01 April 2026 00:36:29 +0000 (0:00:01.138) 0:06:57.154 ******* 2026-04-01 00:36:30.363458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:30.363481 | orchestrator | 2026-04-01 00:36:30.363486 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:30.363491 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.891) 0:06:58.046 ******* 2026-04-01 00:36:30.363497 | orchestrator | 2026-04-01 00:36:30.363502 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:30.363507 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.046) 0:06:58.093 ******* 2026-04-01 00:36:30.363512 | orchestrator | 2026-04-01 00:36:30.363517 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:30.363522 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.194) 0:06:58.287 ******* 2026-04-01 00:36:30.363528 | orchestrator | 2026-04-01 00:36:30.363533 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:30.363545 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.040) 0:06:58.328 ******* 2026-04-01 00:36:56.404626 | orchestrator | 
2026-04-01 00:36:56.405695 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:56.405758 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.040) 0:06:58.368 ******* 2026-04-01 00:36:56.405780 | orchestrator | 2026-04-01 00:36:56.405792 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:56.405804 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.047) 0:06:58.415 ******* 2026-04-01 00:36:56.405816 | orchestrator | 2026-04-01 00:36:56.405827 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-01 00:36:56.405838 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.040) 0:06:58.455 ******* 2026-04-01 00:36:56.405849 | orchestrator | 2026-04-01 00:36:56.405860 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-01 00:36:56.405871 | orchestrator | Wednesday 01 April 2026 00:36:30 +0000 (0:00:00.054) 0:06:58.509 ******* 2026-04-01 00:36:56.405883 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:56.405899 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:56.405924 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:56.405950 | orchestrator | 2026-04-01 00:36:56.405967 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-01 00:36:56.405983 | orchestrator | Wednesday 01 April 2026 00:36:31 +0000 (0:00:01.269) 0:06:59.779 ******* 2026-04-01 00:36:56.406000 | orchestrator | changed: [testbed-manager] 2026-04-01 00:36:56.406086 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:56.406107 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:56.406121 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:56.406132 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:56.406143 | orchestrator | changed: 
[testbed-node-4] 2026-04-01 00:36:56.406154 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:56.406165 | orchestrator | 2026-04-01 00:36:56.406176 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-01 00:36:56.406187 | orchestrator | Wednesday 01 April 2026 00:36:33 +0000 (0:00:01.378) 0:07:01.158 ******* 2026-04-01 00:36:56.406198 | orchestrator | changed: [testbed-manager] 2026-04-01 00:36:56.406209 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:56.406220 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:56.406231 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:56.406242 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:56.406288 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:56.406300 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:56.406310 | orchestrator | 2026-04-01 00:36:56.406322 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-01 00:36:56.406333 | orchestrator | Wednesday 01 April 2026 00:36:34 +0000 (0:00:01.296) 0:07:02.455 ******* 2026-04-01 00:36:56.406344 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:56.406355 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:56.406365 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:56.406376 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:56.406387 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:56.406398 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:56.406409 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:56.406476 | orchestrator | 2026-04-01 00:36:56.406489 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-01 00:36:56.406501 | orchestrator | Wednesday 01 April 2026 00:36:36 +0000 (0:00:02.420) 0:07:04.876 ******* 2026-04-01 00:36:56.406511 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:36:56.406522 | orchestrator | 2026-04-01 00:36:56.406533 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-01 00:36:56.406545 | orchestrator | Wednesday 01 April 2026 00:36:36 +0000 (0:00:00.088) 0:07:04.965 ******* 2026-04-01 00:36:56.406555 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:56.406567 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:36:56.406577 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:36:56.406588 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:36:56.406599 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:36:56.406609 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:36:56.406620 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:36:56.406631 | orchestrator | 2026-04-01 00:36:56.406642 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-01 00:36:56.406655 | orchestrator | Wednesday 01 April 2026 00:36:38 +0000 (0:00:01.275) 0:07:06.240 ******* 2026-04-01 00:36:56.406665 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:56.406676 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:56.406687 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:56.406701 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:56.406721 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:56.406739 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:56.406758 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:56.406778 | orchestrator | 2026-04-01 00:36:56.406798 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-01 00:36:56.406815 | orchestrator | Wednesday 01 April 2026 00:36:38 +0000 (0:00:00.512) 0:07:06.753 ******* 2026-04-01 00:36:56.406862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:56.406886 | orchestrator | 2026-04-01 00:36:56.406966 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-01 00:36:56.406981 | orchestrator | Wednesday 01 April 2026 00:36:39 +0000 (0:00:00.858) 0:07:07.611 ******* 2026-04-01 00:36:56.406992 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:56.407004 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:56.407015 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:56.407026 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:56.407037 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:56.407047 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:56.407058 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:56.407069 | orchestrator | 2026-04-01 00:36:56.407080 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-01 00:36:56.407091 | orchestrator | Wednesday 01 April 2026 00:36:40 +0000 (0:00:00.881) 0:07:08.493 ******* 2026-04-01 00:36:56.407102 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-01 00:36:56.407157 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-01 00:36:56.407170 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-01 00:36:56.407218 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-01 00:36:56.407231 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-01 00:36:56.407242 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-01 00:36:56.407253 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-01 00:36:56.407264 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-01 00:36:56.407274 | orchestrator | changed: [testbed-node-0] => 
(item=docker_images) 2026-04-01 00:36:56.407285 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-01 00:36:56.407296 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-01 00:36:56.407307 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-01 00:36:56.407318 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-01 00:36:56.407329 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-01 00:36:56.407340 | orchestrator | 2026-04-01 00:36:56.407350 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-01 00:36:56.407361 | orchestrator | Wednesday 01 April 2026 00:36:42 +0000 (0:00:02.457) 0:07:10.951 ******* 2026-04-01 00:36:56.407372 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:56.407383 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:56.407394 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:56.407405 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:56.407415 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:36:56.407448 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:36:56.407459 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:36:56.407469 | orchestrator | 2026-04-01 00:36:56.407481 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-01 00:36:56.407491 | orchestrator | Wednesday 01 April 2026 00:36:43 +0000 (0:00:00.416) 0:07:11.367 ******* 2026-04-01 00:36:56.407505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:36:56.407519 | orchestrator | 2026-04-01 00:36:56.407530 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-04-01 00:36:56.407541 | orchestrator | Wednesday 01 April 2026 00:36:44 +0000 (0:00:00.783) 0:07:12.151 ******* 2026-04-01 00:36:56.407552 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:56.407563 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:56.407574 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:56.407584 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:56.407595 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:56.407606 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:56.407617 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:56.407627 | orchestrator | 2026-04-01 00:36:56.407639 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-01 00:36:56.407649 | orchestrator | Wednesday 01 April 2026 00:36:44 +0000 (0:00:00.782) 0:07:12.934 ******* 2026-04-01 00:36:56.407661 | orchestrator | ok: [testbed-manager] 2026-04-01 00:36:56.407671 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:36:56.407682 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:36:56.407693 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:36:56.407704 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:36:56.407714 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:36:56.407725 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:36:56.407736 | orchestrator | 2026-04-01 00:36:56.407747 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-01 00:36:56.407758 | orchestrator | Wednesday 01 April 2026 00:36:45 +0000 (0:00:00.734) 0:07:13.668 ******* 2026-04-01 00:36:56.407785 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:36:56.407796 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:36:56.407807 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:36:56.407818 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:36:56.407829 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:36:56.407839 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:56.407850 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:56.407861 | orchestrator |
2026-04-01 00:36:56.407872 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-01 00:36:56.407883 | orchestrator | Wednesday 01 April 2026 00:36:46 +0000 (0:00:00.405) 0:07:14.074 *******
2026-04-01 00:36:56.407893 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:56.407904 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:36:56.407915 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:36:56.407926 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:36:56.407937 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:36:56.407947 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:36:56.407958 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:36:56.407969 | orchestrator |
2026-04-01 00:36:56.407980 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-01 00:36:56.408000 | orchestrator | Wednesday 01 April 2026 00:36:47 +0000 (0:00:01.524) 0:07:15.598 *******
2026-04-01 00:36:56.408019 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:36:56.408039 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:36:56.408059 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:36:56.408079 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:36:56.408152 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:36:56.408164 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:36:56.408174 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:36:56.408185 | orchestrator |
2026-04-01 00:36:56.408196 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-01 00:36:56.408207 | orchestrator | Wednesday 01 April 2026 00:36:48 +0000 (0:00:00.538) 0:07:16.137 *******
2026-04-01 00:36:56.408218 | orchestrator | ok: [testbed-manager]
2026-04-01 00:36:56.408229 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:36:56.408240 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:36:56.408251 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:36:56.408262 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:36:56.408273 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:36:56.408294 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:37:29.049028 | orchestrator |
2026-04-01 00:37:29.049127 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-01 00:37:29.049140 | orchestrator | Wednesday 01 April 2026 00:36:56 +0000 (0:00:08.302) 0:07:24.440 *******
2026-04-01 00:37:29.049149 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049160 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:37:29.049169 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:37:29.049177 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:37:29.049185 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:37:29.049194 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:37:29.049202 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:37:29.049210 | orchestrator |
2026-04-01 00:37:29.049218 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-01 00:37:29.049226 | orchestrator | Wednesday 01 April 2026 00:36:57 +0000 (0:00:01.308) 0:07:25.749 *******
2026-04-01 00:37:29.049234 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049242 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:37:29.049251 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:37:29.049259 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:37:29.049267 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:37:29.049275 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:37:29.049283 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:37:29.049291 | orchestrator |
2026-04-01 00:37:29.049299 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-01 00:37:29.049328 | orchestrator | Wednesday 01 April 2026 00:36:59 +0000 (0:00:01.776) 0:07:27.526 *******
2026-04-01 00:37:29.049338 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049346 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:37:29.049354 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:37:29.049362 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:37:29.049423 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:37:29.049432 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:37:29.049438 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:37:29.049443 | orchestrator |
2026-04-01 00:37:29.049448 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-01 00:37:29.049453 | orchestrator | Wednesday 01 April 2026 00:37:01 +0000 (0:00:01.904) 0:07:29.430 *******
2026-04-01 00:37:29.049458 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049463 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.049467 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.049472 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.049478 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.049483 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.049487 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.049492 | orchestrator |
2026-04-01 00:37:29.049497 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-01 00:37:29.049502 | orchestrator | Wednesday 01 April 2026 00:37:02 +0000 (0:00:00.892) 0:07:30.322 *******
2026-04-01 00:37:29.049507 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:37:29.049512 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:37:29.049516 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:37:29.049521 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:37:29.049526 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:37:29.049531 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:37:29.049536 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:37:29.049540 | orchestrator |
2026-04-01 00:37:29.049545 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-01 00:37:29.049550 | orchestrator | Wednesday 01 April 2026 00:37:03 +0000 (0:00:00.667) 0:07:31.141 *******
2026-04-01 00:37:29.049555 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:37:29.049560 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:37:29.049566 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:37:29.049572 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:37:29.049577 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:37:29.049583 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:37:29.049589 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:37:29.049594 | orchestrator |
2026-04-01 00:37:29.049600 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-01 00:37:29.049617 | orchestrator | Wednesday 01 April 2026 00:37:03 +0000 (0:00:00.667) 0:07:31.808 *******
2026-04-01 00:37:29.049623 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049628 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.049635 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.049643 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.049651 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.049658 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.049666 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.049674 | orchestrator |
2026-04-01 00:37:29.049681 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-01 00:37:29.049688 | orchestrator | Wednesday 01 April 2026 00:37:04 +0000 (0:00:00.576) 0:07:32.385 *******
2026-04-01 00:37:29.049695 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049702 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.049709 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.049724 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.049737 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.049745 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.049752 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.049767 | orchestrator |
2026-04-01 00:37:29.049773 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-01 00:37:29.049781 | orchestrator | Wednesday 01 April 2026 00:37:04 +0000 (0:00:00.511) 0:07:32.896 *******
2026-04-01 00:37:29.049788 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049796 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.049802 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.049810 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.049818 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.049825 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.049832 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.049839 | orchestrator |
2026-04-01 00:37:29.049847 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-01 00:37:29.049854 | orchestrator | Wednesday 01 April 2026 00:37:05 +0000 (0:00:00.536) 0:07:33.432 *******
2026-04-01 00:37:29.049861 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.049868 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.049875 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.049883 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.049892 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.049900 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.049907 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.049915 | orchestrator |
2026-04-01 00:37:29.049940 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-01 00:37:29.049950 | orchestrator | Wednesday 01 April 2026 00:37:10 +0000 (0:00:04.762) 0:07:38.195 *******
2026-04-01 00:37:29.049957 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:37:29.049966 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:37:29.049974 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:37:29.049982 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:37:29.049990 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:37:29.049997 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:37:29.050006 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:37:29.050011 | orchestrator |
2026-04-01 00:37:29.050061 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-01 00:37:29.050067 | orchestrator | Wednesday 01 April 2026 00:37:10 +0000 (0:00:00.696) 0:07:38.892 *******
2026-04-01 00:37:29.050073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:37:29.050081 | orchestrator |
2026-04-01 00:37:29.050086 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-01 00:37:29.050091 | orchestrator | Wednesday 01 April 2026 00:37:11 +0000 (0:00:00.754) 0:07:39.646 *******
2026-04-01 00:37:29.050096 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.050101 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.050105 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.050110 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.050115 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.050120 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.050125 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.050129 | orchestrator |
2026-04-01 00:37:29.050134 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-01 00:37:29.050139 | orchestrator | Wednesday 01 April 2026 00:37:13 +0000 (0:00:02.188) 0:07:41.835 *******
2026-04-01 00:37:29.050144 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.050149 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.050154 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.050158 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.050163 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.050168 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.050172 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.050177 | orchestrator |
2026-04-01 00:37:29.050182 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-01 00:37:29.050218 | orchestrator | Wednesday 01 April 2026 00:37:15 +0000 (0:00:01.405) 0:07:43.240 *******
2026-04-01 00:37:29.050224 | orchestrator | ok: [testbed-manager]
2026-04-01 00:37:29.050229 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:37:29.050234 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:37:29.050238 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:37:29.050243 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:37:29.050248 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:37:29.050253 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:37:29.050258 | orchestrator |
2026-04-01 00:37:29.050262 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-01 00:37:29.050267 | orchestrator | Wednesday 01 April 2026 00:37:16 +0000 (0:00:00.825) 0:07:44.065 *******
2026-04-01 00:37:29.050272 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050279 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050284 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050294 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050299 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050304 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050309 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-01 00:37:29.050314 | orchestrator |
2026-04-01 00:37:29.050319 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-01 00:37:29.050324 | orchestrator | Wednesday 01 April 2026 00:37:17 +0000 (0:00:01.716) 0:07:45.782 *******
2026-04-01 00:37:29.050329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:37:29.050334 | orchestrator |
2026-04-01 00:37:29.050339 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-01 00:37:29.050344 | orchestrator | Wednesday 01 April 2026 00:37:18 +0000 (0:00:00.948) 0:07:46.730 *******
2026-04-01 00:37:29.050349 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:37:29.050353 | orchestrator | changed: [testbed-manager]
2026-04-01 00:37:29.050358 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:37:29.050385 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:37:29.050391 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:37:29.050396 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:37:29.050401 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:37:29.050406 | orchestrator |
2026-04-01 00:37:29.050417 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-01 00:38:00.907074 | orchestrator | Wednesday 01 April 2026 00:37:29 +0000 (0:00:10.285) 0:07:57.015 *******
2026-04-01 00:38:00.907183 | orchestrator | ok: [testbed-manager]
2026-04-01 00:38:00.907200 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:00.907212 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:00.907223 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:00.907234 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:00.907245 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:00.907256 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:00.907267 | orchestrator |
2026-04-01 00:38:00.907279 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-01 00:38:00.907366 | orchestrator | Wednesday 01 April 2026 00:37:30 +0000 (0:00:01.792) 0:07:58.808 *******
2026-04-01 00:38:00.907379 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:00.907391 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:00.907401 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:00.907412 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:00.907423 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:00.907433 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:00.907444 | orchestrator |
2026-04-01 00:38:00.907455 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-01 00:38:00.907467 | orchestrator | Wednesday 01 April 2026 00:37:32 +0000 (0:00:01.528) 0:08:00.336 *******
2026-04-01 00:38:00.907478 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.907490 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.907500 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.907511 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.907522 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.907533 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.907544 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.907555 | orchestrator |
2026-04-01 00:38:00.907566 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-01 00:38:00.907576 | orchestrator |
2026-04-01 00:38:00.907588 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-01 00:38:00.907598 | orchestrator | Wednesday 01 April 2026 00:37:34 +0000 (0:00:02.228) 0:08:02.565 *******
2026-04-01 00:38:00.907609 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:38:00.907622 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:38:00.907636 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:38:00.907649 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:38:00.907662 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:38:00.907675 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:38:00.907687 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:38:00.907701 | orchestrator |
2026-04-01 00:38:00.907714 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-01 00:38:00.907727 | orchestrator |
2026-04-01 00:38:00.907740 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-01 00:38:00.907753 | orchestrator | Wednesday 01 April 2026 00:37:35 +0000 (0:00:00.485) 0:08:03.051 *******
2026-04-01 00:38:00.907766 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.907778 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.907791 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.907804 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.907816 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.907829 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.907843 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.907854 | orchestrator |
2026-04-01 00:38:00.907865 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-01 00:38:00.907875 | orchestrator | Wednesday 01 April 2026 00:37:36 +0000 (0:00:01.364) 0:08:04.415 *******
2026-04-01 00:38:00.907886 | orchestrator | ok: [testbed-manager]
2026-04-01 00:38:00.907897 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:00.907908 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:00.907918 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:00.907929 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:00.907940 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:00.907950 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:00.907961 | orchestrator |
2026-04-01 00:38:00.907972 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-01 00:38:00.907997 | orchestrator | Wednesday 01 April 2026 00:37:38 +0000 (0:00:01.584) 0:08:06.000 *******
2026-04-01 00:38:00.908009 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:38:00.908020 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:38:00.908031 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:38:00.908043 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:38:00.908061 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:38:00.908073 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:38:00.908083 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:38:00.908094 | orchestrator |
2026-04-01 00:38:00.908106 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-01 00:38:00.908117 | orchestrator | Wednesday 01 April 2026 00:37:38 +0000 (0:00:00.457) 0:08:06.457 *******
2026-04-01 00:38:00.908128 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:38:00.908141 | orchestrator |
2026-04-01 00:38:00.908152 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-01 00:38:00.908163 | orchestrator | Wednesday 01 April 2026 00:37:39 +0000 (0:00:00.774) 0:08:07.231 *******
2026-04-01 00:38:00.908176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:38:00.908189 | orchestrator |
2026-04-01 00:38:00.908200 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-01 00:38:00.908211 | orchestrator | Wednesday 01 April 2026 00:37:40 +0000 (0:00:00.770) 0:08:08.002 *******
2026-04-01 00:38:00.908222 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908233 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908244 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908254 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908265 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908276 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908287 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.908298 | orchestrator |
2026-04-01 00:38:00.908347 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-01 00:38:00.908360 | orchestrator | Wednesday 01 April 2026 00:37:49 +0000 (0:00:09.957) 0:08:17.959 *******
2026-04-01 00:38:00.908371 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908382 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908392 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908403 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908414 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908425 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908436 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.908447 | orchestrator |
2026-04-01 00:38:00.908458 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-01 00:38:00.908469 | orchestrator | Wednesday 01 April 2026 00:37:50 +0000 (0:00:00.771) 0:08:18.730 *******
2026-04-01 00:38:00.908480 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908491 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908502 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908512 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908523 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908534 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908545 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.908556 | orchestrator |
2026-04-01 00:38:00.908567 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-01 00:38:00.908578 | orchestrator | Wednesday 01 April 2026 00:37:52 +0000 (0:00:01.262) 0:08:19.993 *******
2026-04-01 00:38:00.908589 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908600 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908611 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908621 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908632 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908643 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908654 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.908669 | orchestrator |
2026-04-01 00:38:00.908687 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-01 00:38:00.908716 | orchestrator | Wednesday 01 April 2026 00:37:53 +0000 (0:00:01.719) 0:08:21.713 *******
2026-04-01 00:38:00.908735 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908755 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908774 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908795 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908814 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908834 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908854 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.908873 | orchestrator |
2026-04-01 00:38:00.908893 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-01 00:38:00.908912 | orchestrator | Wednesday 01 April 2026 00:37:55 +0000 (0:00:01.287) 0:08:23.000 *******
2026-04-01 00:38:00.908926 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.908936 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.908947 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.908958 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.908969 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.908979 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.908990 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.909001 | orchestrator |
2026-04-01 00:38:00.909012 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-01 00:38:00.909022 | orchestrator |
2026-04-01 00:38:00.909033 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-01 00:38:00.909044 | orchestrator | Wednesday 01 April 2026 00:37:56 +0000 (0:00:01.207) 0:08:24.208 *******
2026-04-01 00:38:00.909055 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:38:00.909066 | orchestrator |
2026-04-01 00:38:00.909077 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-01 00:38:00.909088 | orchestrator | Wednesday 01 April 2026 00:37:57 +0000 (0:00:00.811) 0:08:25.020 *******
2026-04-01 00:38:00.909099 | orchestrator | ok: [testbed-manager]
2026-04-01 00:38:00.909110 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:00.909120 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:00.909131 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:00.909142 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:00.909153 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:00.909163 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:00.909174 | orchestrator |
2026-04-01 00:38:00.909185 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-01 00:38:00.909196 | orchestrator | Wednesday 01 April 2026 00:37:57 +0000 (0:00:00.856) 0:08:25.876 *******
2026-04-01 00:38:00.909207 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:00.909218 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:00.909228 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:00.909239 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:00.909250 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:00.909260 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:00.909271 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:00.909281 | orchestrator |
2026-04-01 00:38:00.909292 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-01 00:38:00.909303 | orchestrator | Wednesday 01 April 2026 00:37:59 +0000 (0:00:01.338) 0:08:27.215 *******
2026-04-01 00:38:00.909349 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:38:00.909361 | orchestrator |
2026-04-01 00:38:00.909372 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-01 00:38:00.909383 | orchestrator | Wednesday 01 April 2026 00:38:00 +0000 (0:00:00.795) 0:08:28.010 *******
2026-04-01 00:38:00.909393 | orchestrator | ok: [testbed-manager]
2026-04-01 00:38:00.909404 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:00.909424 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:00.909435 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:00.909446 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:00.909456 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:00.909467 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:00.909478 | orchestrator |
2026-04-01 00:38:00.909498 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-01 00:38:02.433036 | orchestrator | Wednesday 01 April 2026 00:38:00 +0000 (0:00:00.862) 0:08:28.873 *******
2026-04-01 00:38:02.433135 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:02.433150 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:02.433162 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:02.433173 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:02.433184 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:02.433194 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:02.433220 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:02.433231 | orchestrator |
2026-04-01 00:38:02.433243 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:38:02.433255 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-01 00:38:02.433268 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:38:02.433279 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-01 00:38:02.433290 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-01 00:38:02.433300 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:38:02.433387 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:38:02.433399 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-01 00:38:02.433410 | orchestrator |
2026-04-01 00:38:02.433421 | orchestrator |
2026-04-01 00:38:02.433432 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:38:02.433443 | orchestrator | Wednesday 01 April 2026 00:38:02 +0000 (0:00:01.248) 0:08:30.122 *******
2026-04-01 00:38:02.433454 | orchestrator | ===============================================================================
2026-04-01 00:38:02.433466 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.57s
2026-04-01 00:38:02.433477 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.17s
2026-04-01 00:38:02.433511 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.65s
2026-04-01 00:38:02.433522 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.82s
2026-04-01 00:38:02.433534 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.94s
2026-04-01 00:38:02.433545 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.43s
2026-04-01 00:38:02.433556 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.10s
2026-04-01 00:38:02.433567 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.48s
2026-04-01 00:38:02.433580 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.42s
2026-04-01 00:38:02.433597 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.29s
2026-04-01 00:38:02.433611 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.96s
2026-04-01 00:38:02.433646 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.83s
2026-04-01 00:38:02.433660 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.37s
2026-04-01 00:38:02.433673 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.20s
2026-04-01 00:38:02.433685 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.12s
2026-04-01 00:38:02.433698 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.30s
2026-04-01 00:38:02.433710 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.60s
2026-04-01 00:38:02.433723 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.93s
2026-04-01 00:38:02.433736 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.87s
2026-04-01 00:38:02.433749 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.41s
2026-04-01 00:38:02.610178 | orchestrator | + osism apply fail2ban
2026-04-01 00:38:14.374868 | orchestrator | 2026-04-01 00:38:14 | INFO  | Prepare task for execution of fail2ban.
2026-04-01 00:38:14.459660 | orchestrator | 2026-04-01 00:38:14 | INFO  | Task 65c4a76f-6072-4713-83be-4d5cc0a5a39a (fail2ban) was prepared for execution.
2026-04-01 00:38:14.459788 | orchestrator | 2026-04-01 00:38:14 | INFO  | It takes a moment until task 65c4a76f-6072-4713-83be-4d5cc0a5a39a (fail2ban) has been started and output is visible here.
2026-04-01 00:38:35.875720 | orchestrator |
2026-04-01 00:38:35.875828 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-01 00:38:35.875844 | orchestrator |
2026-04-01 00:38:35.875856 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-01 00:38:35.875885 | orchestrator | Wednesday 01 April 2026 00:38:17 +0000 (0:00:00.349) 0:00:00.349 *******
2026-04-01 00:38:35.875908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:38:35.875922 | orchestrator |
2026-04-01 00:38:35.875933 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-01 00:38:35.875945 | orchestrator | Wednesday 01 April 2026 00:38:19 +0000 (0:00:01.166) 0:00:01.516 *******
2026-04-01 00:38:35.875956 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:35.875968 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:35.875979 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:35.875990 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:35.876000 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:35.876011 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:35.876022 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:35.876032 | orchestrator |
2026-04-01 00:38:35.876043 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-01 00:38:35.876054 | orchestrator | Wednesday 01 April 2026 00:38:31 +0000 (0:00:11.998) 0:00:13.514 *******
2026-04-01 00:38:35.876065 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:35.876075 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:35.876086 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:35.876096 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:35.876107 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:35.876118 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:35.876128 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:35.876139 | orchestrator |
2026-04-01 00:38:35.876150 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-01 00:38:35.876161 | orchestrator | Wednesday 01 April 2026 00:38:32 +0000 (0:00:01.588) 0:00:15.102 *******
2026-04-01 00:38:35.876171 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:38:35.876183 | orchestrator | ok: [testbed-manager]
2026-04-01 00:38:35.876194 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:38:35.876231 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:38:35.876243 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:38:35.876275 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:38:35.876289 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:38:35.876302 | orchestrator |
2026-04-01 00:38:35.876315 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-01 00:38:35.876328 | orchestrator | Wednesday 01 April 2026 00:38:33 +0000 (0:00:01.293) 0:00:16.396 *******
2026-04-01 00:38:35.876341 | orchestrator | changed: [testbed-manager]
2026-04-01 00:38:35.876353 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:38:35.876366 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:38:35.876378 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:38:35.876390 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:38:35.876403 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:38:35.876415 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:38:35.876427 | orchestrator |
2026-04-01 00:38:35.876440 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:38:35.876453 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876467 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876479 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876491 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876517 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876530 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876542 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:38:35.876555 | orchestrator |
2026-04-01 00:38:35.876567 | orchestrator |
2026-04-01 00:38:35.876579 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:38:35.876592 | orchestrator | Wednesday 01 April 2026 00:38:35 +0000 (0:00:01.620) 0:00:18.016 *******
2026-04-01 00:38:35.876604 |
orchestrator | =============================================================================== 2026-04-01 00:38:35.876616 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.00s 2026-04-01 00:38:35.876629 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s 2026-04-01 00:38:35.876641 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.59s 2026-04-01 00:38:35.876653 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.29s 2026-04-01 00:38:35.876664 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.17s 2026-04-01 00:38:36.033151 | orchestrator | + osism apply network 2026-04-01 00:38:47.360867 | orchestrator | 2026-04-01 00:38:47 | INFO  | Prepare task for execution of network. 2026-04-01 00:38:47.436079 | orchestrator | 2026-04-01 00:38:47 | INFO  | Task dfb58c6e-ee1f-4c29-9d2d-7605c258a050 (network) was prepared for execution. 2026-04-01 00:38:47.436187 | orchestrator | 2026-04-01 00:38:47 | INFO  | It takes a moment until task dfb58c6e-ee1f-4c29-9d2d-7605c258a050 (network) has been started and output is visible here. 
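The `osism apply fail2ban` run above installs the fail2ban package, copies configuration files, and reloads the service on every node. As an illustration only — this is not the actual template shipped by `osism.services.fail2ban`, whose contents are not visible in this log — a minimal jail drop-in of the kind such roles commonly deploy could look like:

```ini
; Hypothetical example: /etc/fail2ban/jail.d/sshd.local
; Illustrative sketch; the files copied by the role may differ.
[sshd]
enabled  = true
port     = ssh
filter   = sshd
maxretry = 5
bantime  = 600
```

After a config change of this kind, the role's final "Reload fail2ban configuration" task corresponds to reloading the service so the new jail takes effect.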
2026-04-01 00:39:13.322497 | orchestrator | 2026-04-01 00:39:13.322604 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-01 00:39:13.322621 | orchestrator | 2026-04-01 00:39:13.322633 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-01 00:39:13.322673 | orchestrator | Wednesday 01 April 2026 00:38:50 +0000 (0:00:00.296) 0:00:00.296 ******* 2026-04-01 00:39:13.322695 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.322714 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.322733 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.322751 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.322769 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.322786 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.322803 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.322822 | orchestrator | 2026-04-01 00:39:13.322842 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-01 00:39:13.322860 | orchestrator | Wednesday 01 April 2026 00:38:51 +0000 (0:00:00.551) 0:00:00.848 ******* 2026-04-01 00:39:13.322882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:39:13.322902 | orchestrator | 2026-04-01 00:39:13.322914 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-01 00:39:13.322928 | orchestrator | Wednesday 01 April 2026 00:38:52 +0000 (0:00:01.036) 0:00:01.884 ******* 2026-04-01 00:39:13.322947 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.322965 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.322983 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.323001 | 
orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.323022 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.323041 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.323060 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.323080 | orchestrator | 2026-04-01 00:39:13.323099 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-01 00:39:13.323116 | orchestrator | Wednesday 01 April 2026 00:38:54 +0000 (0:00:02.595) 0:00:04.480 ******* 2026-04-01 00:39:13.323130 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.323142 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.323154 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.323166 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.323179 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.323267 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.323283 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.323297 | orchestrator | 2026-04-01 00:39:13.323308 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-01 00:39:13.323320 | orchestrator | Wednesday 01 April 2026 00:38:56 +0000 (0:00:01.635) 0:00:06.115 ******* 2026-04-01 00:39:13.323331 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-01 00:39:13.323342 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-01 00:39:13.323354 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-01 00:39:13.323365 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-01 00:39:13.323376 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-01 00:39:13.323387 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-01 00:39:13.323398 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-01 00:39:13.323409 | orchestrator | 2026-04-01 00:39:13.323459 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-04-01 00:39:13.323471 | orchestrator | Wednesday 01 April 2026 00:38:57 +0000 (0:00:01.070) 0:00:07.186 ******* 2026-04-01 00:39:13.323482 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 00:39:13.323494 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 00:39:13.323505 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:39:13.323516 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:39:13.323543 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-01 00:39:13.323555 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 00:39:13.323565 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 00:39:13.323577 | orchestrator | 2026-04-01 00:39:13.323601 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-01 00:39:13.323612 | orchestrator | Wednesday 01 April 2026 00:39:00 +0000 (0:00:03.047) 0:00:10.233 ******* 2026-04-01 00:39:13.323624 | orchestrator | changed: [testbed-manager] 2026-04-01 00:39:13.323635 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:39:13.323646 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:39:13.323657 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:39:13.323668 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:39:13.323679 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:39:13.323690 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:39:13.323700 | orchestrator | 2026-04-01 00:39:13.323711 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-01 00:39:13.323722 | orchestrator | Wednesday 01 April 2026 00:39:01 +0000 (0:00:01.504) 0:00:11.738 ******* 2026-04-01 00:39:13.323733 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:39:13.323744 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-01 00:39:13.323755 | orchestrator | ok: [testbed-manager 
-> localhost] 2026-04-01 00:39:13.323766 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 00:39:13.323777 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 00:39:13.323788 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 00:39:13.323799 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 00:39:13.323811 | orchestrator | 2026-04-01 00:39:13.323822 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-01 00:39:13.323833 | orchestrator | Wednesday 01 April 2026 00:39:03 +0000 (0:00:01.668) 0:00:13.407 ******* 2026-04-01 00:39:13.323844 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.323855 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.323866 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.323877 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.323888 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.323899 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.323909 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.323920 | orchestrator | 2026-04-01 00:39:13.323932 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-01 00:39:13.323961 | orchestrator | Wednesday 01 April 2026 00:39:04 +0000 (0:00:00.880) 0:00:14.288 ******* 2026-04-01 00:39:13.323973 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:13.323984 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:13.323995 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:13.324006 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:13.324017 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:13.324028 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:13.324039 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:13.324050 | orchestrator | 2026-04-01 00:39:13.324061 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-04-01 00:39:13.324072 | orchestrator | Wednesday 01 April 2026 00:39:05 +0000 (0:00:00.672) 0:00:14.960 ******* 2026-04-01 00:39:13.324083 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.324094 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.324105 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.324116 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.324127 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.324138 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.324149 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.324160 | orchestrator | 2026-04-01 00:39:13.324171 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-01 00:39:13.324182 | orchestrator | Wednesday 01 April 2026 00:39:07 +0000 (0:00:02.179) 0:00:17.140 ******* 2026-04-01 00:39:13.324217 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:13.324237 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:13.324248 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:13.324259 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:13.324270 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:13.324289 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:13.324301 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-01 00:39:13.324313 | orchestrator | 2026-04-01 00:39:13.324324 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-01 00:39:13.324335 | orchestrator | Wednesday 01 April 2026 00:39:08 +0000 (0:00:00.776) 0:00:17.916 ******* 2026-04-01 00:39:13.324346 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.324357 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:39:13.324368 | orchestrator | changed: [testbed-node-2] 2026-04-01 
00:39:13.324379 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:39:13.324389 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:39:13.324400 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:39:13.324411 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:39:13.324422 | orchestrator | 2026-04-01 00:39:13.324433 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-01 00:39:13.324444 | orchestrator | Wednesday 01 April 2026 00:39:09 +0000 (0:00:01.439) 0:00:19.356 ******* 2026-04-01 00:39:13.324456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:39:13.324469 | orchestrator | 2026-04-01 00:39:13.324480 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-01 00:39:13.324491 | orchestrator | Wednesday 01 April 2026 00:39:10 +0000 (0:00:01.117) 0:00:20.473 ******* 2026-04-01 00:39:13.324502 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.324513 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:13.324524 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.324534 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.324545 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.324556 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.324567 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.324578 | orchestrator | 2026-04-01 00:39:13.324594 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-01 00:39:13.324620 | orchestrator | Wednesday 01 April 2026 00:39:11 +0000 (0:00:01.030) 0:00:21.504 ******* 2026-04-01 00:39:13.324648 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:13.324669 | orchestrator | ok: [testbed-node-0] 2026-04-01 
00:39:13.324687 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:13.324704 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:13.324722 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:13.324741 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:13.324760 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:13.324777 | orchestrator | 2026-04-01 00:39:13.324796 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-01 00:39:13.324808 | orchestrator | Wednesday 01 April 2026 00:39:12 +0000 (0:00:00.651) 0:00:22.155 ******* 2026-04-01 00:39:13.324819 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324830 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324841 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324852 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324863 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324874 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324885 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324896 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324907 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324917 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-01 00:39:13.324939 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324950 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324961 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324972 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-01 00:39:13.324983 | orchestrator | 2026-04-01 00:39:13.325004 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-01 00:39:27.948507 | orchestrator | Wednesday 01 April 2026 00:39:13 +0000 (0:00:00.990) 0:00:23.146 ******* 2026-04-01 00:39:27.948630 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:27.948657 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:27.948677 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:27.948695 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:27.948714 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:27.948731 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:27.948750 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:27.948770 | orchestrator | 2026-04-01 00:39:27.948790 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-01 00:39:27.948810 | orchestrator | Wednesday 01 April 2026 00:39:14 +0000 (0:00:00.733) 0:00:23.880 ******* 2026-04-01 00:39:27.948831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-3, testbed-node-5, testbed-node-2, testbed-node-4 2026-04-01 00:39:27.948852 | orchestrator | 2026-04-01 00:39:27.948871 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-01 00:39:27.948888 | orchestrator | Wednesday 01 April 2026 00:39:17 +0000 (0:00:03.825) 0:00:27.705 ******* 2026-04-01 00:39:27.948909 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-01 00:39:27.948932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.948952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.948972 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-01 00:39:27.948993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 
1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-01 00:39:27.949162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-01 00:39:27.949243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-01 00:39:27.949279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-01 00:39:27.949293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-01 00:39:27.949307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': ['192.168.128.13/20']}}) 2026-04-01 00:39:27.949319 | orchestrator | 2026-04-01 00:39:27.949333 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-01 00:39:27.949346 | orchestrator | Wednesday 01 April 2026 00:39:23 +0000 (0:00:05.141) 0:00:32.847 ******* 2026-04-01 00:39:27.949359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949374 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-01 00:39:27.949387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-01 00:39:27.949398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949438 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-01 00:39:27.949463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-01 00:39:27.949486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-01 00:39:27.949498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-01 00:39:27.949509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-01 00:39:27.949532 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-01 00:39:39.925094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-01 00:39:39.925250 | orchestrator | 2026-04-01 00:39:39.925270 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-01 00:39:39.925283 | orchestrator | Wednesday 01 April 2026 00:39:28 +0000 (0:00:05.310) 0:00:38.157 ******* 2026-04-01 00:39:39.925297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:39:39.925309 | orchestrator | 2026-04-01 00:39:39.925320 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-01 00:39:39.925331 | orchestrator | Wednesday 01 April 2026 00:39:29 +0000 (0:00:01.080) 0:00:39.237 ******* 2026-04-01 00:39:39.925342 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:39.925354 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:39.925365 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:39.925376 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:39.925387 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:39.925397 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:39.925408 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:39.925418 | orchestrator | 2026-04-01 00:39:39.925429 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-04-01 00:39:39.925441 | orchestrator | Wednesday 01 April 2026 00:39:30 +0000 (0:00:01.132) 0:00:40.370 ******* 2026-04-01 00:39:39.925452 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925463 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925474 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925510 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925522 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.925534 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925545 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925556 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925567 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925577 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.925588 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925599 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925612 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925625 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925638 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.925651 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925663 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925676 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925689 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925701 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.925714 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925726 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925738 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925749 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925759 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.925789 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925800 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925811 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925822 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925832 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.925843 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-01 00:39:39.925854 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-01 00:39:39.925865 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-01 00:39:39.925875 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-01 00:39:39.925886 | 
orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:39.925897 | orchestrator | 2026-04-01 00:39:39.925916 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-01 00:39:39.925957 | orchestrator | Wednesday 01 April 2026 00:39:31 +0000 (0:00:00.758) 0:00:41.128 ******* 2026-04-01 00:39:39.925978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:39:39.925997 | orchestrator | 2026-04-01 00:39:39.926074 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-01 00:39:39.926115 | orchestrator | Wednesday 01 April 2026 00:39:32 +0000 (0:00:01.119) 0:00:42.248 ******* 2026-04-01 00:39:39.926132 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.926144 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.926182 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.926194 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.926205 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.926216 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.926226 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:39.926237 | orchestrator | 2026-04-01 00:39:39.926248 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-04-01 00:39:39.926259 | orchestrator | Wednesday 01 April 2026 00:39:33 +0000 (0:00:00.638) 0:00:42.887 ******* 2026-04-01 00:39:39.926270 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.926281 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.926292 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.926303 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.926313 | 
orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.926324 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.926335 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:39.926346 | orchestrator | 2026-04-01 00:39:39.926357 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-01 00:39:39.926368 | orchestrator | Wednesday 01 April 2026 00:39:33 +0000 (0:00:00.536) 0:00:43.423 ******* 2026-04-01 00:39:39.926378 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.926389 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.926400 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.926411 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.926422 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.926433 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.926443 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:39.926454 | orchestrator | 2026-04-01 00:39:39.926465 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-01 00:39:39.926476 | orchestrator | Wednesday 01 April 2026 00:39:34 +0000 (0:00:00.646) 0:00:44.070 ******* 2026-04-01 00:39:39.926487 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:39.926498 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:39.926508 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:39.926519 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:39.926530 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:39.926541 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:39.926552 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:39.926563 | orchestrator | 2026-04-01 00:39:39.926574 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-01 00:39:39.926585 | orchestrator | Wednesday 01 April 2026 00:39:35 +0000 (0:00:01.474) 0:00:45.545 ******* 
2026-04-01 00:39:39.926595 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:39.926606 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:39.926617 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:39.926628 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:39.926638 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:39.926649 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:39.926660 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:39.926670 | orchestrator | 2026-04-01 00:39:39.926700 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-01 00:39:39.926712 | orchestrator | Wednesday 01 April 2026 00:39:36 +0000 (0:00:01.062) 0:00:46.607 ******* 2026-04-01 00:39:39.926734 | orchestrator | ok: [testbed-manager] 2026-04-01 00:39:39.926745 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:39:39.926760 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:39:39.926771 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:39:39.926782 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:39:39.926793 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:39:39.926803 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:39:39.926814 | orchestrator | 2026-04-01 00:39:39.926833 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-01 00:39:39.926844 | orchestrator | Wednesday 01 April 2026 00:39:38 +0000 (0:00:02.035) 0:00:48.642 ******* 2026-04-01 00:39:39.926855 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.926866 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.926876 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.926887 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.926898 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.926909 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.926919 | orchestrator | skipping: [testbed-node-5] 2026-04-01 
00:39:39.926930 | orchestrator | 2026-04-01 00:39:39.926941 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-01 00:39:39.926952 | orchestrator | Wednesday 01 April 2026 00:39:39 +0000 (0:00:00.541) 0:00:49.184 ******* 2026-04-01 00:39:39.926963 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:39:39.926974 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:39:39.926985 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:39:39.926995 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:39:39.927006 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:39:39.927017 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:39:39.927027 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:39:39.927038 | orchestrator | 2026-04-01 00:39:39.927049 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:39:39.927062 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-01 00:39:39.927074 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:39.927106 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:40.073748 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:40.073864 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:40.073887 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:40.073903 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-01 00:39:40.073919 | orchestrator | 2026-04-01 00:39:40.073935 | orchestrator | 2026-04-01 00:39:40.073951 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:39:40.073967 | orchestrator | Wednesday 01 April 2026 00:39:39 +0000 (0:00:00.561) 0:00:49.746 ******* 2026-04-01 00:39:40.073981 | orchestrator | =============================================================================== 2026-04-01 00:39:40.073996 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.31s 2026-04-01 00:39:40.074011 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.14s 2026-04-01 00:39:40.074093 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.83s 2026-04-01 00:39:40.074109 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.05s 2026-04-01 00:39:40.074127 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.60s 2026-04-01 00:39:40.074201 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.18s 2026-04-01 00:39:40.074225 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.04s 2026-04-01 00:39:40.074245 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.67s 2026-04-01 00:39:40.074297 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.64s 2026-04-01 00:39:40.074312 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.50s 2026-04-01 00:39:40.074325 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.47s 2026-04-01 00:39:40.074339 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.44s 2026-04-01 00:39:40.074351 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2026-04-01 00:39:40.074364 | orchestrator | 
osism.commons.network : Include network extra init ---------------------- 1.12s 2026-04-01 00:39:40.074377 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.12s 2026-04-01 00:39:40.074389 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.08s 2026-04-01 00:39:40.074402 | orchestrator | osism.commons.network : Create required directories --------------------- 1.07s 2026-04-01 00:39:40.074429 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.06s 2026-04-01 00:39:40.074443 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.04s 2026-04-01 00:39:40.074456 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2026-04-01 00:39:40.199592 | orchestrator | + osism apply wireguard 2026-04-01 00:39:51.352613 | orchestrator | 2026-04-01 00:39:51 | INFO  | Prepare task for execution of wireguard. 2026-04-01 00:39:51.429713 | orchestrator | 2026-04-01 00:39:51 | INFO  | Task bfa989c6-42ae-47fc-8fdd-04e011db3f13 (wireguard) was prepared for execution. 2026-04-01 00:39:51.429811 | orchestrator | 2026-04-01 00:39:51 | INFO  | It takes a moment until task bfa989c6-42ae-47fc-8fdd-04e011db3f13 (wireguard) has been started and output is visible here. 
2026-04-01 00:40:08.801069 | orchestrator | 2026-04-01 00:40:08.801189 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-01 00:40:08.801205 | orchestrator | 2026-04-01 00:40:08.801216 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-01 00:40:08.801226 | orchestrator | Wednesday 01 April 2026 00:39:54 +0000 (0:00:00.292) 0:00:00.292 ******* 2026-04-01 00:40:08.801236 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:08.801247 | orchestrator | 2026-04-01 00:40:08.801257 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-01 00:40:08.801277 | orchestrator | Wednesday 01 April 2026 00:39:56 +0000 (0:00:01.614) 0:00:01.906 ******* 2026-04-01 00:40:08.801288 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801298 | orchestrator | 2026-04-01 00:40:08.801308 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-01 00:40:08.801318 | orchestrator | Wednesday 01 April 2026 00:40:01 +0000 (0:00:05.098) 0:00:07.005 ******* 2026-04-01 00:40:08.801328 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801337 | orchestrator | 2026-04-01 00:40:08.801347 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-01 00:40:08.801357 | orchestrator | Wednesday 01 April 2026 00:40:01 +0000 (0:00:00.495) 0:00:07.501 ******* 2026-04-01 00:40:08.801366 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801376 | orchestrator | 2026-04-01 00:40:08.801386 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-01 00:40:08.801395 | orchestrator | Wednesday 01 April 2026 00:40:02 +0000 (0:00:00.393) 0:00:07.895 ******* 2026-04-01 00:40:08.801405 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:08.801415 | orchestrator | 2026-04-01 
00:40:08.801425 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-01 00:40:08.801435 | orchestrator | Wednesday 01 April 2026 00:40:02 +0000 (0:00:00.487) 0:00:08.383 ******* 2026-04-01 00:40:08.801445 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:08.801455 | orchestrator | 2026-04-01 00:40:08.801464 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-01 00:40:08.801494 | orchestrator | Wednesday 01 April 2026 00:40:03 +0000 (0:00:00.367) 0:00:08.750 ******* 2026-04-01 00:40:08.801505 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:08.801514 | orchestrator | 2026-04-01 00:40:08.801524 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-01 00:40:08.801534 | orchestrator | Wednesday 01 April 2026 00:40:03 +0000 (0:00:00.377) 0:00:09.128 ******* 2026-04-01 00:40:08.801543 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801553 | orchestrator | 2026-04-01 00:40:08.801562 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-01 00:40:08.801572 | orchestrator | Wednesday 01 April 2026 00:40:04 +0000 (0:00:01.063) 0:00:10.191 ******* 2026-04-01 00:40:08.801582 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-01 00:40:08.801591 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801601 | orchestrator | 2026-04-01 00:40:08.801610 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-01 00:40:08.801620 | orchestrator | Wednesday 01 April 2026 00:40:05 +0000 (0:00:00.938) 0:00:11.130 ******* 2026-04-01 00:40:08.801630 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801641 | orchestrator | 2026-04-01 00:40:08.801653 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-01 
00:40:08.801665 | orchestrator | Wednesday 01 April 2026 00:40:07 +0000 (0:00:02.017) 0:00:13.148 ******* 2026-04-01 00:40:08.801677 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:08.801688 | orchestrator | 2026-04-01 00:40:08.801699 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:40:08.801712 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:40:08.801724 | orchestrator | 2026-04-01 00:40:08.801735 | orchestrator | 2026-04-01 00:40:08.801747 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:40:08.801758 | orchestrator | Wednesday 01 April 2026 00:40:08 +0000 (0:00:00.943) 0:00:14.092 ******* 2026-04-01 00:40:08.801769 | orchestrator | =============================================================================== 2026-04-01 00:40:08.801781 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.10s 2026-04-01 00:40:08.801792 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.02s 2026-04-01 00:40:08.801803 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2026-04-01 00:40:08.801814 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s 2026-04-01 00:40:08.801844 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2026-04-01 00:40:08.801855 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2026-04-01 00:40:08.801867 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.50s 2026-04-01 00:40:08.801878 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.49s 2026-04-01 00:40:08.801900 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.39s 2026-04-01 00:40:08.801912 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s 2026-04-01 00:40:08.801922 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s 2026-04-01 00:40:08.961772 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-01 00:40:08.994158 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-01 00:40:08.994221 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-01 00:40:09.068228 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 206 0 --:--:-- --:--:-- --:--:-- 208 2026-04-01 00:40:09.082394 | orchestrator | + osism apply --environment custom workarounds 2026-04-01 00:40:10.306170 | orchestrator | 2026-04-01 00:40:10 | INFO  | Trying to run play workarounds in environment custom 2026-04-01 00:40:20.451646 | orchestrator | 2026-04-01 00:40:20 | INFO  | Prepare task for execution of workarounds. 2026-04-01 00:40:20.524445 | orchestrator | 2026-04-01 00:40:20 | INFO  | Task 82229373-e5cd-4b34-8f19-c8dbb953072a (workarounds) was prepared for execution. 2026-04-01 00:40:20.524570 | orchestrator | 2026-04-01 00:40:20 | INFO  | It takes a moment until task 82229373-e5cd-4b34-8f19-c8dbb953072a (workarounds) has been started and output is visible here. 
2026-04-01 00:40:44.744642 | orchestrator | 2026-04-01 00:40:44.744785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:40:44.744813 | orchestrator | 2026-04-01 00:40:44.744833 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-01 00:40:44.744851 | orchestrator | Wednesday 01 April 2026 00:40:23 +0000 (0:00:00.162) 0:00:00.162 ******* 2026-04-01 00:40:44.744869 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744887 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744906 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744923 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744940 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744958 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744976 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-01 00:40:44.744994 | orchestrator | 2026-04-01 00:40:44.745010 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-01 00:40:44.745026 | orchestrator | 2026-04-01 00:40:44.745041 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-01 00:40:44.745087 | orchestrator | Wednesday 01 April 2026 00:40:24 +0000 (0:00:00.632) 0:00:00.795 ******* 2026-04-01 00:40:44.745107 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:44.745128 | orchestrator | 2026-04-01 00:40:44.745146 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-01 00:40:44.745187 | orchestrator | 2026-04-01 00:40:44.745279 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-01 00:40:44.745295 | orchestrator | Wednesday 01 April 2026 00:40:26 +0000 (0:00:02.376) 0:00:03.172 ******* 2026-04-01 00:40:44.745309 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:40:44.745322 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:40:44.745334 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:40:44.745347 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:40:44.745360 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:40:44.745372 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:40:44.745385 | orchestrator | 2026-04-01 00:40:44.745398 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-01 00:40:44.745411 | orchestrator | 2026-04-01 00:40:44.745425 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-01 00:40:44.745439 | orchestrator | Wednesday 01 April 2026 00:40:28 +0000 (0:00:02.356) 0:00:05.529 ******* 2026-04-01 00:40:44.745453 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745467 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745479 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745492 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745504 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745517 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-01 00:40:44.745558 | orchestrator | 2026-04-01 00:40:44.745572 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-01 00:40:44.745585 | orchestrator | Wednesday 01 April 2026 00:40:30 +0000 (0:00:01.328) 0:00:06.858 ******* 2026-04-01 00:40:44.745598 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:40:44.745611 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:40:44.745624 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:40:44.745636 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:40:44.745649 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:40:44.745662 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:40:44.745674 | orchestrator | 2026-04-01 00:40:44.745687 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-01 00:40:44.745717 | orchestrator | Wednesday 01 April 2026 00:40:34 +0000 (0:00:03.949) 0:00:10.807 ******* 2026-04-01 00:40:44.745730 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:40:44.745743 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:40:44.745755 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:40:44.745768 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:40:44.745781 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:40:44.745794 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:40:44.745807 | orchestrator | 2026-04-01 00:40:44.745820 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-01 00:40:44.745833 | orchestrator | 2026-04-01 00:40:44.745845 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-01 00:40:44.745858 | orchestrator | Wednesday 01 April 2026 00:40:34 +0000 (0:00:00.455) 0:00:11.263 ******* 2026-04-01 00:40:44.745871 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:44.745883 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:40:44.745896 | orchestrator | changed: [testbed-node-2] 2026-04-01 
00:40:44.745908 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:40:44.745920 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:40:44.745932 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:40:44.745945 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:40:44.745957 | orchestrator | 2026-04-01 00:40:44.745969 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-01 00:40:44.745981 | orchestrator | Wednesday 01 April 2026 00:40:36 +0000 (0:00:01.791) 0:00:13.055 ******* 2026-04-01 00:40:44.745994 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:44.746007 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:40:44.746148 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:40:44.746170 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:40:44.746183 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:40:44.746196 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:40:44.746232 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:40:44.746247 | orchestrator | 2026-04-01 00:40:44.746260 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-01 00:40:44.746273 | orchestrator | Wednesday 01 April 2026 00:40:37 +0000 (0:00:01.481) 0:00:14.536 ******* 2026-04-01 00:40:44.746285 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:40:44.746298 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:44.746310 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:40:44.746322 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:40:44.746334 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:40:44.746346 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:40:44.746358 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:40:44.746371 | orchestrator | 2026-04-01 00:40:44.746384 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-01 00:40:44.746396 | orchestrator 
| Wednesday 01 April 2026 00:40:39 +0000 (0:00:01.552) 0:00:16.089 ******* 2026-04-01 00:40:44.746409 | orchestrator | changed: [testbed-manager] 2026-04-01 00:40:44.746421 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:40:44.746434 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:40:44.746447 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:40:44.746459 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:40:44.746480 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:40:44.746497 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:40:44.746516 | orchestrator | 2026-04-01 00:40:44.746534 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-01 00:40:44.746553 | orchestrator | Wednesday 01 April 2026 00:40:40 +0000 (0:00:01.676) 0:00:17.765 ******* 2026-04-01 00:40:44.746570 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:40:44.746589 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:40:44.746606 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:40:44.746625 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:40:44.746643 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:40:44.746662 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:40:44.746682 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:40:44.746701 | orchestrator | 2026-04-01 00:40:44.746719 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-01 00:40:44.746732 | orchestrator | 2026-04-01 00:40:44.746746 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-01 00:40:44.746758 | orchestrator | Wednesday 01 April 2026 00:40:41 +0000 (0:00:00.712) 0:00:18.478 ******* 2026-04-01 00:40:44.746771 | orchestrator | ok: [testbed-manager] 2026-04-01 00:40:44.746783 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:40:44.746796 | orchestrator | ok: 
[testbed-node-2] 2026-04-01 00:40:44.746809 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:40:44.746822 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:40:44.746834 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:40:44.746846 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:40:44.746858 | orchestrator | 2026-04-01 00:40:44.746870 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:40:44.746884 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:40:44.746898 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746911 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746923 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746936 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746949 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746969 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:40:44.746981 | orchestrator | 2026-04-01 00:40:44.746994 | orchestrator | 2026-04-01 00:40:44.747006 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:40:44.747018 | orchestrator | Wednesday 01 April 2026 00:40:44 +0000 (0:00:03.022) 0:00:21.501 ******* 2026-04-01 00:40:44.747031 | orchestrator | =============================================================================== 2026-04-01 00:40:44.747044 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.95s 2026-04-01 00:40:44.747087 | orchestrator | 
Install python3-docker -------------------------------------------------- 3.02s 2026-04-01 00:40:44.747099 | orchestrator | Apply netplan configuration --------------------------------------------- 2.38s 2026-04-01 00:40:44.747112 | orchestrator | Apply netplan configuration --------------------------------------------- 2.36s 2026-04-01 00:40:44.747124 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.79s 2026-04-01 00:40:44.747147 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.68s 2026-04-01 00:40:44.747160 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.55s 2026-04-01 00:40:44.747172 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.48s 2026-04-01 00:40:44.747184 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.33s 2026-04-01 00:40:44.747197 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s 2026-04-01 00:40:44.747209 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.63s 2026-04-01 00:40:44.747258 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.46s 2026-04-01 00:40:45.038263 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-01 00:40:56.248149 | orchestrator | 2026-04-01 00:40:56 | INFO  | Prepare task for execution of reboot. 2026-04-01 00:40:56.321573 | orchestrator | 2026-04-01 00:40:56 | INFO  | Task 2ccee485-46f2-40e3-af30-121358e0b2a5 (reboot) was prepared for execution. 2026-04-01 00:40:56.321664 | orchestrator | 2026-04-01 00:40:56 | INFO  | It takes a moment until task 2ccee485-46f2-40e3-af30-121358e0b2a5 (reboot) has been started and output is visible here. 
2026-04-01 00:41:07.113231 | orchestrator | 2026-04-01 00:41:07.113310 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113322 | orchestrator | 2026-04-01 00:41:07.113331 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113340 | orchestrator | Wednesday 01 April 2026 00:40:59 +0000 (0:00:00.236) 0:00:00.236 ******* 2026-04-01 00:41:07.113348 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:41:07.113357 | orchestrator | 2026-04-01 00:41:07.113366 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113374 | orchestrator | Wednesday 01 April 2026 00:40:59 +0000 (0:00:00.140) 0:00:00.376 ******* 2026-04-01 00:41:07.113382 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:41:07.113390 | orchestrator | 2026-04-01 00:41:07.113398 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-01 00:41:07.113406 | orchestrator | Wednesday 01 April 2026 00:41:00 +0000 (0:00:01.141) 0:00:01.518 ******* 2026-04-01 00:41:07.113415 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:41:07.113423 | orchestrator | 2026-04-01 00:41:07.113431 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113440 | orchestrator | 2026-04-01 00:41:07.113448 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113456 | orchestrator | Wednesday 01 April 2026 00:41:00 +0000 (0:00:00.097) 0:00:01.615 ******* 2026-04-01 00:41:07.113464 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:41:07.113472 | orchestrator | 2026-04-01 00:41:07.113481 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113489 | orchestrator | Wednesday 01 April 
2026 00:41:01 +0000 (0:00:00.102) 0:00:01.718 ******* 2026-04-01 00:41:07.113497 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:41:07.113505 | orchestrator | 2026-04-01 00:41:07.113513 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-01 00:41:07.113522 | orchestrator | Wednesday 01 April 2026 00:41:02 +0000 (0:00:00.996) 0:00:02.715 ******* 2026-04-01 00:41:07.113530 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:41:07.113538 | orchestrator | 2026-04-01 00:41:07.113546 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113555 | orchestrator | 2026-04-01 00:41:07.113563 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113571 | orchestrator | Wednesday 01 April 2026 00:41:02 +0000 (0:00:00.096) 0:00:02.811 ******* 2026-04-01 00:41:07.113580 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:41:07.113588 | orchestrator | 2026-04-01 00:41:07.113596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113624 | orchestrator | Wednesday 01 April 2026 00:41:02 +0000 (0:00:00.088) 0:00:02.899 ******* 2026-04-01 00:41:07.113633 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:41:07.113641 | orchestrator | 2026-04-01 00:41:07.113649 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-01 00:41:07.113657 | orchestrator | Wednesday 01 April 2026 00:41:03 +0000 (0:00:00.971) 0:00:03.871 ******* 2026-04-01 00:41:07.113666 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:41:07.113674 | orchestrator | 2026-04-01 00:41:07.113682 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113690 | orchestrator | 2026-04-01 00:41:07.113698 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113706 | orchestrator | Wednesday 01 April 2026 00:41:03 +0000 (0:00:00.099) 0:00:03.971 ******* 2026-04-01 00:41:07.113714 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:41:07.113723 | orchestrator | 2026-04-01 00:41:07.113731 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113748 | orchestrator | Wednesday 01 April 2026 00:41:03 +0000 (0:00:00.125) 0:00:04.097 ******* 2026-04-01 00:41:07.113756 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:41:07.113765 | orchestrator | 2026-04-01 00:41:07.113773 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-01 00:41:07.113781 | orchestrator | Wednesday 01 April 2026 00:41:04 +0000 (0:00:01.010) 0:00:05.108 ******* 2026-04-01 00:41:07.113789 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:41:07.113797 | orchestrator | 2026-04-01 00:41:07.113806 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113815 | orchestrator | 2026-04-01 00:41:07.113824 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113833 | orchestrator | Wednesday 01 April 2026 00:41:04 +0000 (0:00:00.114) 0:00:05.222 ******* 2026-04-01 00:41:07.113841 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:41:07.113850 | orchestrator | 2026-04-01 00:41:07.113859 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113868 | orchestrator | Wednesday 01 April 2026 00:41:04 +0000 (0:00:00.096) 0:00:05.319 ******* 2026-04-01 00:41:07.113876 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:41:07.113885 | orchestrator | 2026-04-01 00:41:07.113894 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-01 00:41:07.113903 | orchestrator | Wednesday 01 April 2026 00:41:05 +0000 (0:00:01.051) 0:00:06.370 ******* 2026-04-01 00:41:07.113911 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:41:07.113921 | orchestrator | 2026-04-01 00:41:07.113930 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-01 00:41:07.113939 | orchestrator | 2026-04-01 00:41:07.113948 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-01 00:41:07.113957 | orchestrator | Wednesday 01 April 2026 00:41:05 +0000 (0:00:00.099) 0:00:06.470 ******* 2026-04-01 00:41:07.113966 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:41:07.113974 | orchestrator | 2026-04-01 00:41:07.113983 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-01 00:41:07.113991 | orchestrator | Wednesday 01 April 2026 00:41:05 +0000 (0:00:00.088) 0:00:06.558 ******* 2026-04-01 00:41:07.114000 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:41:07.114008 | orchestrator | 2026-04-01 00:41:07.114105 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-01 00:41:07.114114 | orchestrator | Wednesday 01 April 2026 00:41:06 +0000 (0:00:01.036) 0:00:07.595 ******* 2026-04-01 00:41:07.114137 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:41:07.114146 | orchestrator | 2026-04-01 00:41:07.114155 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:41:07.114170 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:07.114189 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:07.114198 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-01 00:41:07.114207 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:07.114217 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:07.114226 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:41:07.114236 | orchestrator | 2026-04-01 00:41:07.114245 | orchestrator | 2026-04-01 00:41:07.114254 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:41:07.114262 | orchestrator | Wednesday 01 April 2026 00:41:06 +0000 (0:00:00.035) 0:00:07.630 ******* 2026-04-01 00:41:07.114271 | orchestrator | =============================================================================== 2026-04-01 00:41:07.114279 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.21s 2026-04-01 00:41:07.114288 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2026-04-01 00:41:07.114297 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-04-01 00:41:07.228297 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-01 00:41:18.529548 | orchestrator | 2026-04-01 00:41:18 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-01 00:41:18.600592 | orchestrator | 2026-04-01 00:41:18 | INFO  | Task 6c93758d-06e1-449c-9e30-07d08950dea2 (wait-for-connection) was prepared for execution. 2026-04-01 00:41:18.600667 | orchestrator | 2026-04-01 00:41:18 | INFO  | It takes a moment until task 6c93758d-06e1-449c-9e30-07d08950dea2 (wait-for-connection) has been started and output is visible here. 
2026-04-01 00:41:33.307205 | orchestrator | 2026-04-01 00:41:33.307321 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-01 00:41:33.307339 | orchestrator | 2026-04-01 00:41:33.307351 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-01 00:41:33.307363 | orchestrator | Wednesday 01 April 2026 00:41:21 +0000 (0:00:00.275) 0:00:00.275 ******* 2026-04-01 00:41:33.307374 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:41:33.307386 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:41:33.307397 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:41:33.307408 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:41:33.307419 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:41:33.307430 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:41:33.307440 | orchestrator | 2026-04-01 00:41:33.307452 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:41:33.307464 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307476 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307488 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307498 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307509 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307548 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:41:33.307559 | orchestrator | 2026-04-01 00:41:33.307570 | orchestrator | 2026-04-01 00:41:33.307581 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-01 00:41:33.307592 | orchestrator | Wednesday 01 April 2026 00:41:33 +0000 (0:00:11.583) 0:00:11.859 ******* 2026-04-01 00:41:33.307603 | orchestrator | =============================================================================== 2026-04-01 00:41:33.307614 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-04-01 00:41:33.435202 | orchestrator | + osism apply hddtemp 2026-04-01 00:41:44.610410 | orchestrator | 2026-04-01 00:41:44 | INFO  | Prepare task for execution of hddtemp. 2026-04-01 00:41:44.678160 | orchestrator | 2026-04-01 00:41:44 | INFO  | Task fcbbbaa9-ad97-45a7-a7c1-ca55bc7b51d5 (hddtemp) was prepared for execution. 2026-04-01 00:41:44.678258 | orchestrator | 2026-04-01 00:41:44 | INFO  | It takes a moment until task fcbbbaa9-ad97-45a7-a7c1-ca55bc7b51d5 (hddtemp) has been started and output is visible here. 2026-04-01 00:42:13.679306 | orchestrator | 2026-04-01 00:42:13.679426 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-01 00:42:13.679438 | orchestrator | 2026-04-01 00:42:13.679447 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-01 00:42:13.679456 | orchestrator | Wednesday 01 April 2026 00:41:47 +0000 (0:00:00.296) 0:00:00.296 ******* 2026-04-01 00:42:13.679464 | orchestrator | ok: [testbed-manager] 2026-04-01 00:42:13.679474 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:42:13.679481 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:42:13.679489 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:42:13.679496 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:13.679504 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:13.679511 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:13.679518 | orchestrator | 2026-04-01 00:42:13.679526 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-01 00:42:13.679534 | orchestrator | Wednesday 01 April 2026 00:41:48 +0000 (0:00:00.546) 0:00:00.842 ******* 2026-04-01 00:42:13.679543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:42:13.679553 | orchestrator | 2026-04-01 00:42:13.679561 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-01 00:42:13.679568 | orchestrator | Wednesday 01 April 2026 00:41:49 +0000 (0:00:01.065) 0:00:01.907 ******* 2026-04-01 00:42:13.679576 | orchestrator | ok: [testbed-manager] 2026-04-01 00:42:13.679583 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:42:13.679590 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:42:13.679598 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:42:13.679605 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:13.679612 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:13.679619 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:13.679626 | orchestrator | 2026-04-01 00:42:13.679634 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-01 00:42:13.679641 | orchestrator | Wednesday 01 April 2026 00:41:51 +0000 (0:00:02.613) 0:00:04.520 ******* 2026-04-01 00:42:13.679649 | orchestrator | changed: [testbed-manager] 2026-04-01 00:42:13.679659 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:42:13.679666 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:42:13.679674 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:42:13.679681 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:42:13.679688 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:42:13.679696 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:42:13.679703 | 
orchestrator | 2026-04-01 00:42:13.679711 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-01 00:42:13.679741 | orchestrator | Wednesday 01 April 2026 00:41:52 +0000 (0:00:00.944) 0:00:05.465 ******* 2026-04-01 00:42:13.679749 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:42:13.679756 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:42:13.679764 | orchestrator | ok: [testbed-manager] 2026-04-01 00:42:13.679771 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:42:13.679778 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:42:13.679785 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:42:13.679792 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:42:13.679799 | orchestrator | 2026-04-01 00:42:13.679807 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-01 00:42:13.679814 | orchestrator | Wednesday 01 April 2026 00:41:54 +0000 (0:00:01.889) 0:00:07.354 ******* 2026-04-01 00:42:13.679822 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:42:13.679829 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:42:13.679852 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:42:13.679861 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:42:13.679869 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:42:13.679877 | orchestrator | changed: [testbed-manager] 2026-04-01 00:42:13.679886 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:42:13.679894 | orchestrator | 2026-04-01 00:42:13.679902 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-01 00:42:13.679910 | orchestrator | Wednesday 01 April 2026 00:41:55 +0000 (0:00:00.586) 0:00:07.941 ******* 2026-04-01 00:42:13.679976 | orchestrator | changed: [testbed-manager] 2026-04-01 00:42:13.679985 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:42:13.679993 | orchestrator | changed: [testbed-node-1] 
2026-04-01 00:42:13.680002 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:42:13.680010 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:42:13.680018 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:42:13.680026 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:42:13.680034 | orchestrator | 2026-04-01 00:42:13.680042 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-01 00:42:13.680050 | orchestrator | Wednesday 01 April 2026 00:42:10 +0000 (0:00:14.913) 0:00:22.854 ******* 2026-04-01 00:42:13.680059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:42:13.680068 | orchestrator | 2026-04-01 00:42:13.680077 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-01 00:42:13.680085 | orchestrator | Wednesday 01 April 2026 00:42:11 +0000 (0:00:01.172) 0:00:24.027 ******* 2026-04-01 00:42:13.680093 | orchestrator | changed: [testbed-manager] 2026-04-01 00:42:13.680101 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:42:13.680109 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:42:13.680117 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:42:13.680125 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:42:13.680134 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:42:13.680142 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:42:13.680150 | orchestrator | 2026-04-01 00:42:13.680158 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:42:13.680168 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:42:13.680196 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680205 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680213 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680228 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680235 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680242 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 00:42:13.680249 | orchestrator | 2026-04-01 00:42:13.680256 | orchestrator | 2026-04-01 00:42:13.680264 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:42:13.680271 | orchestrator | Wednesday 01 April 2026 00:42:13 +0000 (0:00:01.952) 0:00:25.980 ******* 2026-04-01 00:42:13.680278 | orchestrator | =============================================================================== 2026-04-01 00:42:13.680285 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.91s 2026-04-01 00:42:13.680293 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.61s 2026-04-01 00:42:13.680300 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s 2026-04-01 00:42:13.680307 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.89s 2026-04-01 00:42:13.680314 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-04-01 00:42:13.680321 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.07s 2026-04-01 00:42:13.680328 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.94s 2026-04-01 00:42:13.680335 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.59s 2026-04-01 00:42:13.680343 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.55s 2026-04-01 00:42:13.857410 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-01 00:42:13.908871 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-01 00:42:13.909017 | orchestrator | + sudo systemctl restart manager.service 2026-04-01 00:42:31.342655 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-01 00:42:31.342767 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-01 00:42:31.342785 | orchestrator | + local max_attempts=60 2026-04-01 00:42:31.342798 | orchestrator | + local name=ceph-ansible 2026-04-01 00:42:31.342810 | orchestrator | + local attempt_num=1 2026-04-01 00:42:31.342821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:42:31.380166 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:31.380250 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:31.380265 | orchestrator | + sleep 5 2026-04-01 00:42:36.386475 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:42:36.404698 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:36.404810 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:36.404833 | orchestrator | + sleep 5 2026-04-01 00:42:41.407380 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:42:41.441523 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:41.441623 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:41.441640 | orchestrator | + sleep 5 2026-04-01 00:42:46.445083 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-01 00:42:46.478807 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:46.478942 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:46.478960 | orchestrator | + sleep 5 2026-04-01 00:42:51.482067 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:42:51.517672 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:51.517766 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:51.517780 | orchestrator | + sleep 5 2026-04-01 00:42:56.522980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:42:56.556169 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:42:56.556252 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:42:56.556260 | orchestrator | + sleep 5 2026-04-01 00:43:01.560236 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:01.597280 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:01.597383 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:01.597400 | orchestrator | + sleep 5 2026-04-01 00:43:06.600516 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:06.652630 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:06.652698 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:06.652704 | orchestrator | + sleep 5 2026-04-01 00:43:11.655702 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:11.690652 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:11.690745 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:11.690756 | orchestrator | + sleep 5 2026-04-01 00:43:16.696185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 
00:43:16.734314 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:16.734413 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:16.734428 | orchestrator | + sleep 5 2026-04-01 00:43:21.738000 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:21.774128 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:21.774212 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:21.774222 | orchestrator | + sleep 5 2026-04-01 00:43:26.779316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:26.817463 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:26.817566 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:26.817583 | orchestrator | + sleep 5 2026-04-01 00:43:31.821976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:31.861940 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:31.862048 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-01 00:43:31.862059 | orchestrator | + sleep 5 2026-04-01 00:43:36.867151 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-01 00:43:36.896586 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:36.896679 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-01 00:43:36.896694 | orchestrator | + local max_attempts=60 2026-04-01 00:43:36.896755 | orchestrator | + local name=kolla-ansible 2026-04-01 00:43:36.896917 | orchestrator | + local attempt_num=1 2026-04-01 00:43:36.896940 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-01 00:43:36.922961 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:36.923048 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-01 00:43:36.923062 | 
orchestrator | + local max_attempts=60 2026-04-01 00:43:36.923074 | orchestrator | + local name=osism-ansible 2026-04-01 00:43:36.923085 | orchestrator | + local attempt_num=1 2026-04-01 00:43:36.923497 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-01 00:43:36.961378 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-01 00:43:36.961476 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-01 00:43:36.961496 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-01 00:43:37.107076 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-01 00:43:37.247446 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-01 00:43:37.380653 | orchestrator | ARA in osism-ansible already disabled. 2026-04-01 00:43:37.522428 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-01 00:43:37.522536 | orchestrator | + osism apply gather-facts 2026-04-01 00:43:48.909818 | orchestrator | 2026-04-01 00:43:48 | INFO  | Prepare task for execution of gather-facts. 2026-04-01 00:43:48.979574 | orchestrator | 2026-04-01 00:43:48 | INFO  | Task 51b94459-93ab-4422-ba84-a4d0d7f0090b (gather-facts) was prepared for execution. 2026-04-01 00:43:48.979630 | orchestrator | 2026-04-01 00:43:48 | INFO  | It takes a moment until task 51b94459-93ab-4422-ba84-a4d0d7f0090b (gather-facts) has been started and output is visible here. 
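The polling loop visible in the `+`/`++` xtrace above (repeated `docker inspect` health probes followed by `sleep 5`) can be reconstructed as a small shell helper. This is a sketch inferred from the trace, not the verbatim upstream script: the function name, argument order, and variable names come straight from the xtrace, but the real implementation may differ, and the sketch resolves `docker` from `PATH` instead of the hard-coded `/usr/bin/docker`.

```shell
# Reconstructed sketch of wait_for_container_healthy, inferred from the
# xtrace lines in the log above; not the verbatim upstream implementation.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    # Probe the container's health status until Docker reports "healthy",
    # sleeping 5 seconds between probes and giving up after max_attempts.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log this is invoked as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 probes roughly 5 seconds apart (about 5 minutes) while the container works through `unhealthy` → `starting` → `healthy` after the `manager.service` restart.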
2026-04-01 00:43:52.376364 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-01 00:43:52.376462 | orchestrator | -vvvv to see details
2026-04-01 00:43:52.376479 | orchestrator |
2026-04-01 00:43:52.376490 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 00:43:52.376531 | orchestrator |
2026-04-01 00:43:52.376544 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:43:52.376557 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376569 | orchestrator | ...ignoring
2026-04-01 00:43:52.376580 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376591 | orchestrator | ...ignoring
2026-04-01 00:43:52.376617 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376629 | orchestrator | ...ignoring
2026-04-01 00:43:52.376641 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376651 | orchestrator | ...ignoring
2026-04-01 00:43:52.376663 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376672 | orchestrator | ...ignoring
2026-04-01 00:43:52.376684 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376695 | orchestrator | ...ignoring
2026-04-01 00:43:52.376706 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-01 00:43:52.376717 | orchestrator | ...ignoring
2026-04-01 00:43:52.376729 | orchestrator |
2026-04-01 00:43:52.376739 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-01 00:43:52.376750 | orchestrator |
2026-04-01 00:43:52.376761 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-01 00:43:52.376771 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:43:52.376784 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:43:52.376844 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:43:52.376857 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:43:52.376867 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:43:52.376877 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:43:52.376888 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:43:52.376898 | orchestrator |
2026-04-01 00:43:52.376908 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:43:52.376920 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.376945 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.376956 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.376964 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.376986 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.376994 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.377002 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-01 00:43:52.377009 | orchestrator |
2026-04-01 00:43:52.487262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-01 00:43:52.497119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-01 00:43:52.506069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-01 00:43:52.514861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-01 00:43:52.523564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-01 00:43:52.532446 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-01 00:43:52.541241 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-01 00:43:52.550235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-01 00:43:52.559004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-01 00:43:52.567697 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-01 00:43:52.576660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-01 00:43:52.595158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-01 00:43:52.612629 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-01 00:43:52.625034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-01 00:43:52.634877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-01 00:43:52.643962 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-01 00:43:52.652285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-01 00:43:52.660162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-01 00:43:52.667982 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-01 00:43:52.676633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-01 00:43:52.684774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-01 00:43:52.692467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-01 00:43:52.700374 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-01 00:43:52.710247 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-01 00:43:53.032731 | orchestrator | ok: Runtime: 0:24:06.999612
2026-04-01 00:43:53.165806 |
2026-04-01 00:43:53.165965 | TASK [Deploy services]
2026-04-01 00:43:53.699791 | orchestrator | skipping: Conditional result was False
2026-04-01 00:43:53.718104 |
2026-04-01 00:43:53.718288 | TASK [Deploy in a nutshell]
2026-04-01 00:43:54.418764 | orchestrator | + set -e
2026-04-01 00:43:54.418899 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-01 00:43:54.418911 | orchestrator | ++ export INTERACTIVE=false
2026-04-01 00:43:54.418920 | orchestrator | ++ INTERACTIVE=false
2026-04-01 00:43:54.418925 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-01 00:43:54.418929 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-01 00:43:54.418944 | orchestrator | + source /opt/manager-vars.sh
2026-04-01 00:43:54.418983 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-01 00:43:54.418996 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-01 00:43:54.419006 | orchestrator | ++ export CEPH_VERSION=
2026-04-01 00:43:54.419012 | orchestrator | ++ CEPH_VERSION=
2026-04-01 00:43:54.419016 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-01 00:43:54.419023 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-01 00:43:54.419027 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-01 00:43:54.419036 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-01 00:43:54.419040 | orchestrator | ++ export OPENSTACK_VERSION=
2026-04-01 00:43:54.419189 | orchestrator |
2026-04-01 00:43:54.419196 | orchestrator | # PULL IMAGES
2026-04-01 00:43:54.419201 | orchestrator |
2026-04-01 00:43:54.419208 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-01 00:43:54.419212 | orchestrator | ++ export ARA=false
2026-04-01 00:43:54.419216 | orchestrator | ++ ARA=false
2026-04-01 00:43:54.419220 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-01 00:43:54.419224 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-01 00:43:54.419228 | orchestrator | ++ export TEMPEST=true
2026-04-01 00:43:54.419232 | orchestrator | ++ TEMPEST=true
2026-04-01 00:43:54.419236 | orchestrator | ++ export IS_ZUUL=true
2026-04-01 00:43:54.419240 | orchestrator | ++ IS_ZUUL=true
2026-04-01 00:43:54.419243 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126
2026-04-01 00:43:54.419248 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.126
2026-04-01 00:43:54.419252 | orchestrator | ++ export EXTERNAL_API=false
2026-04-01 00:43:54.419255 | orchestrator | ++ EXTERNAL_API=false
2026-04-01 00:43:54.419259 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-01 00:43:54.419263 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-01 00:43:54.419267 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-01 00:43:54.419271 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-01 00:43:54.419275 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-01 00:43:54.419279 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-01 00:43:54.419283 | orchestrator | + echo
2026-04-01 00:43:54.419289 | orchestrator | + echo '# PULL IMAGES'
2026-04-01 00:43:54.419293 | orchestrator | + echo
2026-04-01 00:43:54.420130 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-01 00:43:54.474096 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-01 00:43:54.474203 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-01 00:43:55.558257 | orchestrator | 2026-04-01 00:43:55 | INFO  | Trying to run play pull-images in environment custom
2026-04-01 00:44:05.708740 | orchestrator | 2026-04-01 00:44:05 | INFO  | Prepare task for execution of pull-images.
2026-04-01 00:44:05.780374 | orchestrator | 2026-04-01 00:44:05 | INFO  | Task 8cc96fd7-36ba-4668-b64c-e288ec793c01 (pull-images) was prepared for execution.
2026-04-01 00:44:05.780464 | orchestrator | 2026-04-01 00:44:05 | INFO  | Task 8cc96fd7-36ba-4668-b64c-e288ec793c01 is running in background. No more output. Check ARA for logs.
2026-04-01 00:44:07.104260 | orchestrator | 2026-04-01 00:44:07 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-01 00:44:17.183406 | orchestrator | 2026-04-01 00:44:17 | INFO  | Prepare task for execution of wipe-partitions.
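The trace above shows `++ semver 10.0.0 7.0.0` followed by `+ [[ 1 -ge 0 ]]`, i.e. a helper that compares two versions and yields 1, 0, or -1. A minimal sketch of such a comparison; the function name matches the call site, but this body is an assumption, not the testbed's actual implementation:

```shell
# Sketch of a semver-style comparison like the `semver 10.0.0 7.0.0`
# call in the trace (which evidently yielded 1): print 1, 0, or -1
# depending on how the first version compares to the second.
# Illustrative only; pre-release tags and build metadata are ignored.
semver() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        if [ "${a[i]:-0}" -gt "${b[i]:-0}" ]; then echo 1; return; fi
        if [ "${a[i]:-0}" -lt "${b[i]:-0}" ]; then echo -1; return; fi
    done
    echo 0
}
```

Comparing component-wise as integers is what makes `10.0.0` sort after `7.0.0`, where a plain string comparison would get it wrong.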
2026-04-01 00:44:17.256382 | orchestrator | 2026-04-01 00:44:17 | INFO  | Task c5be9525-a687-4e79-8314-baff9cc0a98e (wipe-partitions) was prepared for execution.
2026-04-01 00:44:17.256518 | orchestrator | 2026-04-01 00:44:17 | INFO  | It takes a moment until task c5be9525-a687-4e79-8314-baff9cc0a98e (wipe-partitions) has been started and output is visible here.
2026-04-01 00:44:29.218702 | orchestrator |
2026-04-01 00:44:29.218845 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-01 00:44:29.218864 | orchestrator |
2026-04-01 00:44:29.218873 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-01 00:44:29.218886 | orchestrator | Wednesday 01 April 2026 00:44:20 +0000 (0:00:00.149) 0:00:00.149 *******
2026-04-01 00:44:29.218897 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:44:29.218931 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:44:29.218940 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:44:29.218949 | orchestrator |
2026-04-01 00:44:29.218957 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-01 00:44:29.218967 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:01.218) 0:00:01.368 *******
2026-04-01 00:44:29.218975 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:29.218988 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:29.218998 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:29.219007 | orchestrator |
2026-04-01 00:44:29.219016 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-01 00:44:29.219024 | orchestrator | Wednesday 01 April 2026 00:44:21 +0000 (0:00:00.288) 0:00:01.656 *******
2026-04-01 00:44:29.219033 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:29.219043 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:29.219051 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:44:29.219059 | orchestrator |
2026-04-01 00:44:29.219069 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-01 00:44:29.219077 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.591) 0:00:02.247 *******
2026-04-01 00:44:29.219085 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:29.219093 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:29.219101 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:29.219110 | orchestrator |
2026-04-01 00:44:29.219119 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-01 00:44:29.219128 | orchestrator | Wednesday 01 April 2026 00:44:22 +0000 (0:00:00.220) 0:00:02.468 *******
2026-04-01 00:44:29.219137 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-01 00:44:29.219149 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-01 00:44:29.219159 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-01 00:44:29.219168 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-01 00:44:29.219177 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-01 00:44:29.219186 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-01 00:44:29.219195 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-01 00:44:29.219204 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-01 00:44:29.219213 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-01 00:44:29.219223 | orchestrator |
2026-04-01 00:44:29.219233 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-01 00:44:29.219243 | orchestrator | Wednesday 01 April 2026 00:44:23 +0000 (0:00:01.376) 0:00:03.845 *******
2026-04-01 00:44:29.219252 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-01 00:44:29.219261 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-01 00:44:29.219269 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-01 00:44:29.219278 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-01 00:44:29.219286 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-01 00:44:29.219295 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-01 00:44:29.219304 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-01 00:44:29.219312 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-01 00:44:29.219320 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-01 00:44:29.219329 | orchestrator |
2026-04-01 00:44:29.219338 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-01 00:44:29.219347 | orchestrator | Wednesday 01 April 2026 00:44:25 +0000 (0:00:01.499) 0:00:05.345 *******
2026-04-01 00:44:29.219356 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-01 00:44:29.219367 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-01 00:44:29.219377 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-01 00:44:29.219386 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-01 00:44:29.219395 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-01 00:44:29.219421 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-01 00:44:29.219431 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-01 00:44:29.219440 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-01 00:44:29.219448 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-01 00:44:29.219457 | orchestrator |
2026-04-01 00:44:29.219465 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-01 00:44:29.219474 | orchestrator | Wednesday 01 April 2026 00:44:27 +0000 (0:00:02.298) 0:00:07.643 *******
2026-04-01 00:44:29.219482 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:44:29.219491 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:44:29.219500 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:44:29.219509 | orchestrator |
2026-04-01 00:44:29.219518 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-01 00:44:29.219528 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.662) 0:00:08.305 *******
2026-04-01 00:44:29.219538 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:44:29.219547 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:44:29.219555 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:44:29.219565 | orchestrator |
2026-04-01 00:44:29.219574 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:44:29.219585 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:29.219596 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:29.219625 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:29.219634 | orchestrator |
2026-04-01 00:44:29.219644 | orchestrator |
2026-04-01 00:44:29.219652 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:44:29.219661 | orchestrator | Wednesday 01 April 2026 00:44:28 +0000 (0:00:00.705) 0:00:09.011 *******
2026-04-01 00:44:29.219669 | orchestrator | ===============================================================================
2026-04-01 00:44:29.219678 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s
2026-04-01 00:44:29.219687 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.50s
2026-04-01 00:44:29.219697 | orchestrator | Check device availability ----------------------------------------------- 1.38s
2026-04-01 00:44:29.219705 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.22s
2026-04-01 00:44:29.219715 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s
2026-04-01 00:44:29.219724 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s
2026-04-01 00:44:29.219748 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s
2026-04-01 00:44:29.219787 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s
2026-04-01 00:44:29.219796 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-04-01 00:44:40.522793 | orchestrator | 2026-04-01 00:44:40 | INFO  | Prepare task for execution of facts.
2026-04-01 00:44:40.600913 | orchestrator | 2026-04-01 00:44:40 | INFO  | Task 2a3373c2-bfb8-41b7-b13f-b8be1c2b189f (facts) was prepared for execution.
2026-04-01 00:44:40.600965 | orchestrator | 2026-04-01 00:44:40 | INFO  | It takes a moment until task 2a3373c2-bfb8-41b7-b13f-b8be1c2b189f (facts) has been started and output is visible here.
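The wipe-partitions play above starts by finding logical devices owned by UID 167 (the conventional ceph UID inside ceph containers). The real task is an Ansible step; a hedged, directory-parameterized sketch of the same idea, with the default path and function name as assumptions:

```shell
# Illustrative sketch of "find all logical devices owned by UID 167":
# list device-mapper nodes under a directory whose owner matches a UID.
# 167 is the conventional ceph UID; the parameterized directory is only
# there so the logic can be exercised outside /dev.
find_devices_owned_by() {
    local uid=${1:-167}
    local dev_dir=${2:-/dev}
    find "$dev_dir" -maxdepth 1 -name 'dm-*' -user "$uid" 2>/dev/null
}
```

GNU `find -user` accepts a numeric ID when no matching username exists, which is what makes the bare `167` usable even on hosts without a `ceph` user.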
2026-04-01 00:44:53.535277 | orchestrator |
2026-04-01 00:44:53.535373 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-01 00:44:53.535384 | orchestrator |
2026-04-01 00:44:53.535392 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-01 00:44:53.535422 | orchestrator | Wednesday 01 April 2026 00:44:43 +0000 (0:00:00.335) 0:00:00.335 *******
2026-04-01 00:44:53.535429 | orchestrator | ok: [testbed-manager]
2026-04-01 00:44:53.535437 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:44:53.535443 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:53.535450 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:44:53.535515 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:53.535523 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:44:53.535529 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:44:53.535537 | orchestrator |
2026-04-01 00:44:53.535543 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-01 00:44:53.535568 | orchestrator | Wednesday 01 April 2026 00:44:45 +0000 (0:00:01.333) 0:00:01.669 *******
2026-04-01 00:44:53.535575 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:44:53.535583 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:44:53.535590 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:44:53.535597 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:44:53.535605 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:53.535612 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:53.535618 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:53.535625 | orchestrator |
2026-04-01 00:44:53.535632 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-01 00:44:53.535639 | orchestrator |
2026-04-01 00:44:53.535645 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-01 00:44:53.535655 | orchestrator | Wednesday 01 April 2026 00:44:46 +0000 (0:00:01.155) 0:00:02.825 *******
2026-04-01 00:44:53.535663 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:44:53.535669 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:44:53.535676 | orchestrator | ok: [testbed-manager]
2026-04-01 00:44:53.535682 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:44:53.535687 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:44:53.535693 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:44:53.535699 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:44:53.535704 | orchestrator |
2026-04-01 00:44:53.535710 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-01 00:44:53.535715 | orchestrator |
2026-04-01 00:44:53.535722 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-01 00:44:53.535792 | orchestrator | Wednesday 01 April 2026 00:44:52 +0000 (0:00:06.612) 0:00:09.437 *******
2026-04-01 00:44:53.535802 | orchestrator | skipping: [testbed-manager]
2026-04-01 00:44:53.535809 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:44:53.535816 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:44:53.535821 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:44:53.535827 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:44:53.535833 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:44:53.535838 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:44:53.535845 | orchestrator |
2026-04-01 00:44:53.535853 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:44:53.535861 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535870 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535878 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535886 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535893 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535901 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535946 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:44:53.535954 | orchestrator |
2026-04-01 00:44:53.535961 | orchestrator |
2026-04-01 00:44:53.535968 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:44:53.535982 | orchestrator | Wednesday 01 April 2026 00:44:53 +0000 (0:00:00.498) 0:00:09.936 *******
2026-04-01 00:44:53.535995 | orchestrator | ===============================================================================
2026-04-01 00:44:53.536059 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.61s
2026-04-01 00:44:53.536068 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s
2026-04-01 00:44:53.536075 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s
2026-04-01 00:44:53.536082 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-04-01 00:44:54.800554 | orchestrator | 2026-04-01 00:44:54 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-01 00:44:54.864293 | orchestrator | 2026-04-01 00:44:54 | INFO  | Task 133ef789-058b-4f84-8a8e-60800fe385db (ceph-configure-lvm-volumes) was prepared for execution.
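Each PLAY RECAP row above packs per-host counters as `key=value` pairs. A small hedged helper for pulling one counter out of such a row, purely as illustrative post-processing of the log, not something the job itself runs:

```shell
# Sketch: extract one counter (ok/changed/unreachable/failed/skipped/
# rescued/ignored) from an Ansible PLAY RECAP row like the ones above.
# Splits the row on whitespace and matches the key before the "=".
recap_field() {
    local line=$1 key=$2
    printf '%s\n' "$line" | tr ' ' '\n' | awk -F= -v k="$key" '$1 == k { print $2 }'
}
```

With this, checking that a run had no unreachable hosts is a one-liner over the recap rows.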
2026-04-01 00:44:54.864375 | orchestrator | 2026-04-01 00:44:54 | INFO  | It takes a moment until task 133ef789-058b-4f84-8a8e-60800fe385db (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-01 00:45:06.361913 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-01 00:45:06.362000 | orchestrator | 2.16.14
2026-04-01 00:45:06.362072 | orchestrator |
2026-04-01 00:45:06.362081 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-01 00:45:06.362089 | orchestrator |
2026-04-01 00:45:06.362096 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-01 00:45:06.362113 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.314) 0:00:00.314 *******
2026-04-01 00:45:06.362120 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-01 00:45:06.362127 | orchestrator |
2026-04-01 00:45:06.362133 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-01 00:45:06.362139 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.232) 0:00:00.547 *******
2026-04-01 00:45:06.362145 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:45:06.362151 | orchestrator |
2026-04-01 00:45:06.362157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362163 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.226) 0:00:00.773 *******
2026-04-01 00:45:06.362170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:45:06.362177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:45:06.362183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:45:06.362189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:45:06.362195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:45:06.362202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:45:06.362208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:45:06.362213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-01 00:45:06.362220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-01 00:45:06.362226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-01 00:45:06.362232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-01 00:45:06.362258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-01 00:45:06.362264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-01 00:45:06.362270 | orchestrator |
2026-04-01 00:45:06.362276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362282 | orchestrator | Wednesday 01 April 2026 00:44:59 +0000 (0:00:00.366) 0:00:01.140 *******
2026-04-01 00:45:06.362288 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362294 | orchestrator |
2026-04-01 00:45:06.362300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362307 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.467) 0:00:01.607 *******
2026-04-01 00:45:06.362312 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362318 | orchestrator |
2026-04-01 00:45:06.362323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362329 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.188) 0:00:01.795 *******
2026-04-01 00:45:06.362338 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362345 | orchestrator |
2026-04-01 00:45:06.362351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362357 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.178) 0:00:01.974 *******
2026-04-01 00:45:06.362364 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362370 | orchestrator |
2026-04-01 00:45:06.362377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362384 | orchestrator | Wednesday 01 April 2026 00:45:00 +0000 (0:00:00.214) 0:00:02.189 *******
2026-04-01 00:45:06.362392 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362399 | orchestrator |
2026-04-01 00:45:06.362406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362412 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.189) 0:00:02.378 *******
2026-04-01 00:45:06.362418 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362425 | orchestrator |
2026-04-01 00:45:06.362432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362439 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.193) 0:00:02.572 *******
2026-04-01 00:45:06.362446 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362453 | orchestrator |
2026-04-01 00:45:06.362461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362467 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.192) 0:00:02.764 *******
2026-04-01 00:45:06.362474 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:45:06.362481 | orchestrator |
2026-04-01 00:45:06.362489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362496 | orchestrator | Wednesday 01 April 2026 00:45:01 +0000 (0:00:00.195) 0:00:02.959 *******
2026-04-01 00:45:06.362503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486)
2026-04-01 00:45:06.362511 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486)
2026-04-01 00:45:06.362520 | orchestrator |
2026-04-01 00:45:06.362527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362551 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.431) 0:00:03.391 *******
2026-04-01 00:45:06.362558 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896)
2026-04-01 00:45:06.362564 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896)
2026-04-01 00:45:06.362570 | orchestrator |
2026-04-01 00:45:06.362577 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362583 | orchestrator | Wednesday 01 April 2026 00:45:02 +0000 (0:00:00.413) 0:00:03.804 *******
2026-04-01 00:45:06.362597 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402)
2026-04-01 00:45:06.362603 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402)
2026-04-01 00:45:06.362610 | orchestrator |
2026-04-01 00:45:06.362632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362645 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.738) 0:00:04.542 *******
2026-04-01 00:45:06.362651 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1)
2026-04-01 00:45:06.362658 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1)
2026-04-01 00:45:06.362664 | orchestrator |
2026-04-01 00:45:06.362670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:45:06.362677 | orchestrator | Wednesday 01 April 2026 00:45:03 +0000 (0:00:00.626) 0:00:05.168 *******
2026-04-01 00:45:06.362683 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-01 00:45:06.362689 | orchestrator |
2026-04-01 00:45:06.362695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:45:06.362701 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.698) 0:00:05.867 *******
2026-04-01 00:45:06.362707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:45:06.362770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:45:06.362778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:45:06.362785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:45:06.362791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:45:06.362798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:45:06.362803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:45:06.362810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-04-01 00:45:06.362815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-01 00:45:06.362822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-01 00:45:06.362829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-01 00:45:06.362835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-01 00:45:06.362841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-01 00:45:06.362848 | orchestrator | 2026-04-01 00:45:06.362854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.362860 | orchestrator | Wednesday 01 April 2026 00:45:04 +0000 (0:00:00.364) 0:00:06.231 ******* 2026-04-01 00:45:06.362873 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.362880 | orchestrator | 2026-04-01 00:45:06.362886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.362892 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.209) 0:00:06.440 ******* 2026-04-01 00:45:06.362899 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.362905 | orchestrator | 2026-04-01 00:45:06.362911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.362917 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.192) 0:00:06.633 ******* 2026-04-01 00:45:06.362923 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.362929 | orchestrator | 2026-04-01 00:45:06.362936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.362950 | orchestrator | Wednesday 01 April 2026 00:45:05 
+0000 (0:00:00.200) 0:00:06.833 ******* 2026-04-01 00:45:06.362956 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.362963 | orchestrator | 2026-04-01 00:45:06.362969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.362975 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.190) 0:00:07.024 ******* 2026-04-01 00:45:06.362982 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.362988 | orchestrator | 2026-04-01 00:45:06.362995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.363006 | orchestrator | Wednesday 01 April 2026 00:45:05 +0000 (0:00:00.193) 0:00:07.218 ******* 2026-04-01 00:45:06.363013 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.363020 | orchestrator | 2026-04-01 00:45:06.363026 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:06.363033 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.201) 0:00:07.420 ******* 2026-04-01 00:45:06.363039 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:06.363046 | orchestrator | 2026-04-01 00:45:06.363061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.486906 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.187) 0:00:07.607 ******* 2026-04-01 00:45:13.486972 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.486984 | orchestrator | 2026-04-01 00:45:13.486991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.486997 | orchestrator | Wednesday 01 April 2026 00:45:06 +0000 (0:00:00.200) 0:00:07.808 ******* 2026-04-01 00:45:13.487003 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-01 00:45:13.487009 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-01 
00:45:13.487015 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-01 00:45:13.487021 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-01 00:45:13.487026 | orchestrator | 2026-04-01 00:45:13.487032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.487038 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.879) 0:00:08.687 ******* 2026-04-01 00:45:13.487043 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487049 | orchestrator | 2026-04-01 00:45:13.487055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.487060 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.175) 0:00:08.863 ******* 2026-04-01 00:45:13.487066 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487072 | orchestrator | 2026-04-01 00:45:13.487078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.487084 | orchestrator | Wednesday 01 April 2026 00:45:07 +0000 (0:00:00.206) 0:00:09.070 ******* 2026-04-01 00:45:13.487090 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487096 | orchestrator | 2026-04-01 00:45:13.487102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:13.487108 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.192) 0:00:09.262 ******* 2026-04-01 00:45:13.487113 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487119 | orchestrator | 2026-04-01 00:45:13.487125 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:45:13.487131 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.189) 0:00:09.452 ******* 2026-04-01 00:45:13.487137 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:45:13.487143 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:45:13.487149 | orchestrator | 2026-04-01 00:45:13.487154 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-01 00:45:13.487160 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.165) 0:00:09.617 ******* 2026-04-01 00:45:13.487166 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487186 | orchestrator | 2026-04-01 00:45:13.487192 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:45:13.487198 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.129) 0:00:09.746 ******* 2026-04-01 00:45:13.487203 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487209 | orchestrator | 2026-04-01 00:45:13.487214 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:45:13.487221 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.124) 0:00:09.870 ******* 2026-04-01 00:45:13.487227 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487232 | orchestrator | 2026-04-01 00:45:13.487238 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:45:13.487243 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.109) 0:00:09.980 ******* 2026-04-01 00:45:13.487248 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:45:13.487254 | orchestrator | 2026-04-01 00:45:13.487259 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:45:13.487265 | orchestrator | Wednesday 01 April 2026 00:45:08 +0000 (0:00:00.146) 0:00:10.127 ******* 2026-04-01 00:45:13.487271 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '070a6fcd-e232-5822-bdac-2856eb469583'}}) 2026-04-01 00:45:13.487277 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24dba708-820d-5543-af14-6cbe38251993'}}) 2026-04-01 00:45:13.487282 | orchestrator | 2026-04-01 00:45:13.487288 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:45:13.487293 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.161) 0:00:10.289 ******* 2026-04-01 00:45:13.487299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '070a6fcd-e232-5822-bdac-2856eb469583'}})  2026-04-01 00:45:13.487312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24dba708-820d-5543-af14-6cbe38251993'}})  2026-04-01 00:45:13.487318 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487323 | orchestrator | 2026-04-01 00:45:13.487329 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-01 00:45:13.487334 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.143) 0:00:10.432 ******* 2026-04-01 00:45:13.487340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '070a6fcd-e232-5822-bdac-2856eb469583'}})  2026-04-01 00:45:13.487345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24dba708-820d-5543-af14-6cbe38251993'}})  2026-04-01 00:45:13.487350 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487356 | orchestrator | 2026-04-01 00:45:13.487361 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-01 00:45:13.487367 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.132) 0:00:10.565 ******* 2026-04-01 00:45:13.487372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '070a6fcd-e232-5822-bdac-2856eb469583'}})  2026-04-01 00:45:13.487387 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24dba708-820d-5543-af14-6cbe38251993'}})  2026-04-01 00:45:13.487393 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487398 | orchestrator | 2026-04-01 00:45:13.487403 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-01 00:45:13.487409 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.282) 0:00:10.848 ******* 2026-04-01 00:45:13.487414 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:45:13.487419 | orchestrator | 2026-04-01 00:45:13.487424 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-01 00:45:13.487429 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.135) 0:00:10.983 ******* 2026-04-01 00:45:13.487434 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:45:13.487440 | orchestrator | 2026-04-01 00:45:13.487452 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-01 00:45:13.487457 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.144) 0:00:11.127 ******* 2026-04-01 00:45:13.487463 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487468 | orchestrator | 2026-04-01 00:45:13.487473 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-01 00:45:13.487485 | orchestrator | Wednesday 01 April 2026 00:45:09 +0000 (0:00:00.120) 0:00:11.248 ******* 2026-04-01 00:45:13.487491 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487497 | orchestrator | 2026-04-01 00:45:13.487503 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-01 00:45:13.487509 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.128) 0:00:11.376 ******* 2026-04-01 00:45:13.487515 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487521 | orchestrator | 2026-04-01 
00:45:13.487527 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-01 00:45:13.487533 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.165) 0:00:11.542 ******* 2026-04-01 00:45:13.487538 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:45:13.487544 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:45:13.487550 | orchestrator |  "sdb": { 2026-04-01 00:45:13.487556 | orchestrator |  "osd_lvm_uuid": "070a6fcd-e232-5822-bdac-2856eb469583" 2026-04-01 00:45:13.487562 | orchestrator |  }, 2026-04-01 00:45:13.487568 | orchestrator |  "sdc": { 2026-04-01 00:45:13.487574 | orchestrator |  "osd_lvm_uuid": "24dba708-820d-5543-af14-6cbe38251993" 2026-04-01 00:45:13.487580 | orchestrator |  } 2026-04-01 00:45:13.487586 | orchestrator |  } 2026-04-01 00:45:13.487592 | orchestrator | } 2026-04-01 00:45:13.487598 | orchestrator | 2026-04-01 00:45:13.487603 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-01 00:45:13.487607 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.131) 0:00:11.674 ******* 2026-04-01 00:45:13.487613 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487619 | orchestrator | 2026-04-01 00:45:13.487625 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-01 00:45:13.487631 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.132) 0:00:11.806 ******* 2026-04-01 00:45:13.487637 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487643 | orchestrator | 2026-04-01 00:45:13.487649 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-01 00:45:13.487655 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.130) 0:00:11.936 ******* 2026-04-01 00:45:13.487661 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:45:13.487667 | orchestrator | 2026-04-01 
00:45:13.487673 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-01 00:45:13.487679 | orchestrator | Wednesday 01 April 2026 00:45:10 +0000 (0:00:00.136) 0:00:12.073 ******* 2026-04-01 00:45:13.487685 | orchestrator | changed: [testbed-node-3] => { 2026-04-01 00:45:13.487691 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-01 00:45:13.487697 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:45:13.487703 | orchestrator |  "sdb": { 2026-04-01 00:45:13.487741 | orchestrator |  "osd_lvm_uuid": "070a6fcd-e232-5822-bdac-2856eb469583" 2026-04-01 00:45:13.487748 | orchestrator |  }, 2026-04-01 00:45:13.487754 | orchestrator |  "sdc": { 2026-04-01 00:45:13.487760 | orchestrator |  "osd_lvm_uuid": "24dba708-820d-5543-af14-6cbe38251993" 2026-04-01 00:45:13.487766 | orchestrator |  } 2026-04-01 00:45:13.487772 | orchestrator |  }, 2026-04-01 00:45:13.487779 | orchestrator |  "lvm_volumes": [ 2026-04-01 00:45:13.487785 | orchestrator |  { 2026-04-01 00:45:13.487791 | orchestrator |  "data": "osd-block-070a6fcd-e232-5822-bdac-2856eb469583", 2026-04-01 00:45:13.487797 | orchestrator |  "data_vg": "ceph-070a6fcd-e232-5822-bdac-2856eb469583" 2026-04-01 00:45:13.487803 | orchestrator |  }, 2026-04-01 00:45:13.487814 | orchestrator |  { 2026-04-01 00:45:13.487823 | orchestrator |  "data": "osd-block-24dba708-820d-5543-af14-6cbe38251993", 2026-04-01 00:45:13.487834 | orchestrator |  "data_vg": "ceph-24dba708-820d-5543-af14-6cbe38251993" 2026-04-01 00:45:13.487844 | orchestrator |  } 2026-04-01 00:45:13.487850 | orchestrator |  ] 2026-04-01 00:45:13.487855 | orchestrator |  } 2026-04-01 00:45:13.487860 | orchestrator | } 2026-04-01 00:45:13.487865 | orchestrator | 2026-04-01 00:45:13.487870 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-01 00:45:13.487875 | orchestrator | Wednesday 01 April 2026 00:45:11 +0000 (0:00:00.188) 0:00:12.262 ******* 2026-04-01 
00:45:13.487880 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-01 00:45:13.487885 | orchestrator | 2026-04-01 00:45:13.487890 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-01 00:45:13.487895 | orchestrator | 2026-04-01 00:45:13.487900 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:45:13.487905 | orchestrator | Wednesday 01 April 2026 00:45:13 +0000 (0:00:02.038) 0:00:14.300 ******* 2026-04-01 00:45:13.487910 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-01 00:45:13.487915 | orchestrator | 2026-04-01 00:45:13.487920 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:45:13.487930 | orchestrator | Wednesday 01 April 2026 00:45:13 +0000 (0:00:00.238) 0:00:14.539 ******* 2026-04-01 00:45:13.487935 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:13.487940 | orchestrator | 2026-04-01 00:45:13.487954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145006 | orchestrator | Wednesday 01 April 2026 00:45:13 +0000 (0:00:00.198) 0:00:14.737 ******* 2026-04-01 00:45:20.145105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:45:20.145117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:45:20.145123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:45:20.145129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:45:20.145135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:45:20.145141 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:45:20.145147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:45:20.145153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:45:20.145163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-01 00:45:20.145170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:45:20.145176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:45:20.145182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:45:20.145188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:45:20.145194 | orchestrator | 2026-04-01 00:45:20.145215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145228 | orchestrator | Wednesday 01 April 2026 00:45:13 +0000 (0:00:00.354) 0:00:15.092 ******* 2026-04-01 00:45:20.145235 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145242 | orchestrator | 2026-04-01 00:45:20.145249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145256 | orchestrator | Wednesday 01 April 2026 00:45:14 +0000 (0:00:00.184) 0:00:15.276 ******* 2026-04-01 00:45:20.145262 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145289 | orchestrator | 2026-04-01 00:45:20.145296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145301 | orchestrator | Wednesday 01 April 2026 00:45:14 +0000 (0:00:00.221) 0:00:15.498 ******* 2026-04-01 00:45:20.145307 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:45:20.145313 | orchestrator | 2026-04-01 00:45:20.145320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145325 | orchestrator | Wednesday 01 April 2026 00:45:14 +0000 (0:00:00.189) 0:00:15.687 ******* 2026-04-01 00:45:20.145331 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145337 | orchestrator | 2026-04-01 00:45:20.145343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145349 | orchestrator | Wednesday 01 April 2026 00:45:14 +0000 (0:00:00.197) 0:00:15.884 ******* 2026-04-01 00:45:20.145355 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145361 | orchestrator | 2026-04-01 00:45:20.145367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145374 | orchestrator | Wednesday 01 April 2026 00:45:14 +0000 (0:00:00.204) 0:00:16.089 ******* 2026-04-01 00:45:20.145380 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145396 | orchestrator | 2026-04-01 00:45:20.145402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145407 | orchestrator | Wednesday 01 April 2026 00:45:15 +0000 (0:00:00.428) 0:00:16.518 ******* 2026-04-01 00:45:20.145412 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145417 | orchestrator | 2026-04-01 00:45:20.145423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145428 | orchestrator | Wednesday 01 April 2026 00:45:15 +0000 (0:00:00.181) 0:00:16.700 ******* 2026-04-01 00:45:20.145434 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145439 | orchestrator | 2026-04-01 00:45:20.145445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145450 | 
orchestrator | Wednesday 01 April 2026 00:45:15 +0000 (0:00:00.174) 0:00:16.874 ******* 2026-04-01 00:45:20.145456 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d) 2026-04-01 00:45:20.145463 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d) 2026-04-01 00:45:20.145469 | orchestrator | 2026-04-01 00:45:20.145475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145497 | orchestrator | Wednesday 01 April 2026 00:45:15 +0000 (0:00:00.365) 0:00:17.240 ******* 2026-04-01 00:45:20.145503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4) 2026-04-01 00:45:20.145509 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4) 2026-04-01 00:45:20.145515 | orchestrator | 2026-04-01 00:45:20.145522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145529 | orchestrator | Wednesday 01 April 2026 00:45:16 +0000 (0:00:00.364) 0:00:17.605 ******* 2026-04-01 00:45:20.145536 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005) 2026-04-01 00:45:20.145542 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005) 2026-04-01 00:45:20.145548 | orchestrator | 2026-04-01 00:45:20.145555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145578 | orchestrator | Wednesday 01 April 2026 00:45:16 +0000 (0:00:00.373) 0:00:17.978 ******* 2026-04-01 00:45:20.145585 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7) 2026-04-01 00:45:20.145591 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7) 2026-04-01 00:45:20.145597 | orchestrator | 2026-04-01 00:45:20.145603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:20.145621 | orchestrator | Wednesday 01 April 2026 00:45:17 +0000 (0:00:00.391) 0:00:18.369 ******* 2026-04-01 00:45:20.145628 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:45:20.145634 | orchestrator | 2026-04-01 00:45:20.145640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145647 | orchestrator | Wednesday 01 April 2026 00:45:17 +0000 (0:00:00.302) 0:00:18.672 ******* 2026-04-01 00:45:20.145653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:45:20.145660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:45:20.145666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:45:20.145673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:45:20.145680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:45:20.145687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:45:20.145694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:45:20.145743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:45:20.145748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-01 00:45:20.145752 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:45:20.145757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:45:20.145762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:45:20.145766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:45:20.145771 | orchestrator | 2026-04-01 00:45:20.145775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145779 | orchestrator | Wednesday 01 April 2026 00:45:17 +0000 (0:00:00.342) 0:00:19.014 ******* 2026-04-01 00:45:20.145784 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145789 | orchestrator | 2026-04-01 00:45:20.145793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145798 | orchestrator | Wednesday 01 April 2026 00:45:17 +0000 (0:00:00.173) 0:00:19.188 ******* 2026-04-01 00:45:20.145802 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145807 | orchestrator | 2026-04-01 00:45:20.145811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145816 | orchestrator | Wednesday 01 April 2026 00:45:18 +0000 (0:00:00.451) 0:00:19.639 ******* 2026-04-01 00:45:20.145821 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145828 | orchestrator | 2026-04-01 00:45:20.145834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145840 | orchestrator | Wednesday 01 April 2026 00:45:18 +0000 (0:00:00.181) 0:00:19.820 ******* 2026-04-01 00:45:20.145846 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145852 | orchestrator | 2026-04-01 00:45:20.145858 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145863 | orchestrator | Wednesday 01 April 2026 00:45:18 +0000 (0:00:00.177) 0:00:19.998 ******* 2026-04-01 00:45:20.145869 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145876 | orchestrator | 2026-04-01 00:45:20.145881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145887 | orchestrator | Wednesday 01 April 2026 00:45:18 +0000 (0:00:00.182) 0:00:20.180 ******* 2026-04-01 00:45:20.145893 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145898 | orchestrator | 2026-04-01 00:45:20.145905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145928 | orchestrator | Wednesday 01 April 2026 00:45:19 +0000 (0:00:00.179) 0:00:20.359 ******* 2026-04-01 00:45:20.145934 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145941 | orchestrator | 2026-04-01 00:45:20.145947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145953 | orchestrator | Wednesday 01 April 2026 00:45:19 +0000 (0:00:00.166) 0:00:20.526 ******* 2026-04-01 00:45:20.145959 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:20.145965 | orchestrator | 2026-04-01 00:45:20.145970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.145976 | orchestrator | Wednesday 01 April 2026 00:45:19 +0000 (0:00:00.176) 0:00:20.702 ******* 2026-04-01 00:45:20.145982 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-01 00:45:20.145989 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-01 00:45:20.145996 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-01 00:45:20.146002 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-01 00:45:20.146008 | orchestrator | 2026-04-01 
00:45:20.146060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:20.146065 | orchestrator | Wednesday 01 April 2026 00:45:20 +0000 (0:00:00.585) 0:00:21.287 ******* 2026-04-01 00:45:20.146069 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.648821 | orchestrator | 2026-04-01 00:45:25.648912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:25.648925 | orchestrator | Wednesday 01 April 2026 00:45:20 +0000 (0:00:00.182) 0:00:21.469 ******* 2026-04-01 00:45:25.648933 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.648940 | orchestrator | 2026-04-01 00:45:25.648947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:25.648954 | orchestrator | Wednesday 01 April 2026 00:45:20 +0000 (0:00:00.169) 0:00:21.639 ******* 2026-04-01 00:45:25.648961 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.648968 | orchestrator | 2026-04-01 00:45:25.648974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:25.648980 | orchestrator | Wednesday 01 April 2026 00:45:20 +0000 (0:00:00.174) 0:00:21.813 ******* 2026-04-01 00:45:25.648986 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.648991 | orchestrator | 2026-04-01 00:45:25.648998 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:45:25.649004 | orchestrator | Wednesday 01 April 2026 00:45:20 +0000 (0:00:00.176) 0:00:21.990 ******* 2026-04-01 00:45:25.649011 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:45:25.649018 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:45:25.649024 | orchestrator | 2026-04-01 00:45:25.649031 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-01 00:45:25.649046 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.282) 0:00:22.273 ******* 2026-04-01 00:45:25.649052 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649058 | orchestrator | 2026-04-01 00:45:25.649064 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:45:25.649070 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.125) 0:00:22.399 ******* 2026-04-01 00:45:25.649076 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649081 | orchestrator | 2026-04-01 00:45:25.649088 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:45:25.649094 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.121) 0:00:22.520 ******* 2026-04-01 00:45:25.649101 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649107 | orchestrator | 2026-04-01 00:45:25.649114 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:45:25.649121 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.121) 0:00:22.641 ******* 2026-04-01 00:45:25.649127 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:25.649156 | orchestrator | 2026-04-01 00:45:25.649163 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:45:25.649170 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.126) 0:00:22.768 ******* 2026-04-01 00:45:25.649177 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00bcfd13-59f0-54da-b43f-34edf6af7c7d'}}) 2026-04-01 00:45:25.649184 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f8eedd5-4e35-5081-a67e-565e77fef082'}}) 2026-04-01 00:45:25.649191 | orchestrator | 2026-04-01 00:45:25.649197 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:45:25.649204 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.140) 0:00:22.909 ******* 2026-04-01 00:45:25.649211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00bcfd13-59f0-54da-b43f-34edf6af7c7d'}})  2026-04-01 00:45:25.649219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f8eedd5-4e35-5081-a67e-565e77fef082'}})  2026-04-01 00:45:25.649225 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649231 | orchestrator | 2026-04-01 00:45:25.649238 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-01 00:45:25.649244 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.136) 0:00:23.045 ******* 2026-04-01 00:45:25.649250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00bcfd13-59f0-54da-b43f-34edf6af7c7d'}})  2026-04-01 00:45:25.649257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f8eedd5-4e35-5081-a67e-565e77fef082'}})  2026-04-01 00:45:25.649263 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649270 | orchestrator | 2026-04-01 00:45:25.649276 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-01 00:45:25.649283 | orchestrator | Wednesday 01 April 2026 00:45:21 +0000 (0:00:00.158) 0:00:23.204 ******* 2026-04-01 00:45:25.649289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00bcfd13-59f0-54da-b43f-34edf6af7c7d'}})  2026-04-01 00:45:25.649295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f8eedd5-4e35-5081-a67e-565e77fef082'}})  2026-04-01 00:45:25.649302 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649308 | 
orchestrator | 2026-04-01 00:45:25.649315 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-01 00:45:25.649335 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.125) 0:00:23.330 ******* 2026-04-01 00:45:25.649342 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:25.649348 | orchestrator | 2026-04-01 00:45:25.649354 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-01 00:45:25.649361 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.116) 0:00:23.446 ******* 2026-04-01 00:45:25.649373 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:45:25.649380 | orchestrator | 2026-04-01 00:45:25.649386 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-01 00:45:25.649392 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.124) 0:00:23.571 ******* 2026-04-01 00:45:25.649413 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649420 | orchestrator | 2026-04-01 00:45:25.649426 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-01 00:45:25.649432 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.114) 0:00:23.686 ******* 2026-04-01 00:45:25.649439 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649445 | orchestrator | 2026-04-01 00:45:25.649452 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-01 00:45:25.649458 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.256) 0:00:23.943 ******* 2026-04-01 00:45:25.649464 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649470 | orchestrator | 2026-04-01 00:45:25.649476 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-01 00:45:25.649489 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 
(0:00:00.121) 0:00:24.064 ******* 2026-04-01 00:45:25.649495 | orchestrator | ok: [testbed-node-4] => { 2026-04-01 00:45:25.649502 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:45:25.649508 | orchestrator |  "sdb": { 2026-04-01 00:45:25.649515 | orchestrator |  "osd_lvm_uuid": "00bcfd13-59f0-54da-b43f-34edf6af7c7d" 2026-04-01 00:45:25.649522 | orchestrator |  }, 2026-04-01 00:45:25.649528 | orchestrator |  "sdc": { 2026-04-01 00:45:25.649534 | orchestrator |  "osd_lvm_uuid": "2f8eedd5-4e35-5081-a67e-565e77fef082" 2026-04-01 00:45:25.649540 | orchestrator |  } 2026-04-01 00:45:25.649546 | orchestrator |  } 2026-04-01 00:45:25.649553 | orchestrator | } 2026-04-01 00:45:25.649559 | orchestrator | 2026-04-01 00:45:25.649566 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-01 00:45:25.649573 | orchestrator | Wednesday 01 April 2026 00:45:22 +0000 (0:00:00.119) 0:00:24.183 ******* 2026-04-01 00:45:25.649580 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649586 | orchestrator | 2026-04-01 00:45:25.649593 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-01 00:45:25.649599 | orchestrator | Wednesday 01 April 2026 00:45:23 +0000 (0:00:00.121) 0:00:24.304 ******* 2026-04-01 00:45:25.649606 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649612 | orchestrator | 2026-04-01 00:45:25.649619 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-01 00:45:25.649625 | orchestrator | Wednesday 01 April 2026 00:45:23 +0000 (0:00:00.146) 0:00:24.451 ******* 2026-04-01 00:45:25.649631 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:45:25.649638 | orchestrator | 2026-04-01 00:45:25.649644 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-01 00:45:25.649651 | orchestrator | Wednesday 01 April 2026 00:45:23 +0000 
(0:00:00.123) 0:00:24.574 ******* 2026-04-01 00:45:25.649658 | orchestrator | changed: [testbed-node-4] => { 2026-04-01 00:45:25.649664 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-01 00:45:25.649671 | orchestrator |  "ceph_osd_devices": { 2026-04-01 00:45:25.649677 | orchestrator |  "sdb": { 2026-04-01 00:45:25.649683 | orchestrator |  "osd_lvm_uuid": "00bcfd13-59f0-54da-b43f-34edf6af7c7d" 2026-04-01 00:45:25.649690 | orchestrator |  }, 2026-04-01 00:45:25.649739 | orchestrator |  "sdc": { 2026-04-01 00:45:25.649746 | orchestrator |  "osd_lvm_uuid": "2f8eedd5-4e35-5081-a67e-565e77fef082" 2026-04-01 00:45:25.649752 | orchestrator |  } 2026-04-01 00:45:25.649759 | orchestrator |  }, 2026-04-01 00:45:25.649765 | orchestrator |  "lvm_volumes": [ 2026-04-01 00:45:25.649772 | orchestrator |  { 2026-04-01 00:45:25.649778 | orchestrator |  "data": "osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d", 2026-04-01 00:45:25.649785 | orchestrator |  "data_vg": "ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d" 2026-04-01 00:45:25.649791 | orchestrator |  }, 2026-04-01 00:45:25.649797 | orchestrator |  { 2026-04-01 00:45:25.649804 | orchestrator |  "data": "osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082", 2026-04-01 00:45:25.649810 | orchestrator |  "data_vg": "ceph-2f8eedd5-4e35-5081-a67e-565e77fef082" 2026-04-01 00:45:25.649817 | orchestrator |  } 2026-04-01 00:45:25.649823 | orchestrator |  ] 2026-04-01 00:45:25.649829 | orchestrator |  } 2026-04-01 00:45:25.649836 | orchestrator | } 2026-04-01 00:45:25.649842 | orchestrator | 2026-04-01 00:45:25.649850 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-01 00:45:25.649857 | orchestrator | Wednesday 01 April 2026 00:45:23 +0000 (0:00:00.183) 0:00:24.757 ******* 2026-04-01 00:45:25.649863 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-01 00:45:25.649870 | orchestrator | 2026-04-01 00:45:25.649877 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-01 00:45:25.649889 | orchestrator | 2026-04-01 00:45:25.649896 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:45:25.649903 | orchestrator | Wednesday 01 April 2026 00:45:24 +0000 (0:00:01.021) 0:00:25.779 ******* 2026-04-01 00:45:25.649909 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-01 00:45:25.649915 | orchestrator | 2026-04-01 00:45:25.649921 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:45:25.649927 | orchestrator | Wednesday 01 April 2026 00:45:24 +0000 (0:00:00.361) 0:00:26.141 ******* 2026-04-01 00:45:25.649933 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:25.649939 | orchestrator | 2026-04-01 00:45:25.649945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:25.649951 | orchestrator | Wednesday 01 April 2026 00:45:25 +0000 (0:00:00.476) 0:00:26.618 ******* 2026-04-01 00:45:25.649956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:45:25.649963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:45:25.649968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:45:25.649974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:45:25.649980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:45:25.649992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:45:34.066437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:45:34.066496 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:45:34.066503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-01 00:45:34.066507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:45:34.066511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:45:34.066524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:45:34.066528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:45:34.066532 | orchestrator | 2026-04-01 00:45:34.066537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066542 | orchestrator | Wednesday 01 April 2026 00:45:25 +0000 (0:00:00.362) 0:00:26.981 ******* 2026-04-01 00:45:34.066546 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066550 | orchestrator | 2026-04-01 00:45:34.066554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066558 | orchestrator | Wednesday 01 April 2026 00:45:25 +0000 (0:00:00.200) 0:00:27.181 ******* 2026-04-01 00:45:34.066563 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066566 | orchestrator | 2026-04-01 00:45:34.066570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066574 | orchestrator | Wednesday 01 April 2026 00:45:26 +0000 (0:00:00.190) 0:00:27.371 ******* 2026-04-01 00:45:34.066578 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066582 | orchestrator | 2026-04-01 00:45:34.066586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066593 | 
orchestrator | Wednesday 01 April 2026 00:45:26 +0000 (0:00:00.180) 0:00:27.552 ******* 2026-04-01 00:45:34.066599 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066605 | orchestrator | 2026-04-01 00:45:34.066614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066620 | orchestrator | Wednesday 01 April 2026 00:45:26 +0000 (0:00:00.182) 0:00:27.735 ******* 2026-04-01 00:45:34.066626 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066644 | orchestrator | 2026-04-01 00:45:34.066649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066652 | orchestrator | Wednesday 01 April 2026 00:45:26 +0000 (0:00:00.194) 0:00:27.929 ******* 2026-04-01 00:45:34.066656 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066660 | orchestrator | 2026-04-01 00:45:34.066667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066673 | orchestrator | Wednesday 01 April 2026 00:45:26 +0000 (0:00:00.196) 0:00:28.125 ******* 2026-04-01 00:45:34.066679 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066724 | orchestrator | 2026-04-01 00:45:34.066728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066732 | orchestrator | Wednesday 01 April 2026 00:45:27 +0000 (0:00:00.201) 0:00:28.326 ******* 2026-04-01 00:45:34.066736 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066740 | orchestrator | 2026-04-01 00:45:34.066744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066748 | orchestrator | Wednesday 01 April 2026 00:45:27 +0000 (0:00:00.196) 0:00:28.523 ******* 2026-04-01 00:45:34.066752 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49) 2026-04-01 00:45:34.066756 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49) 2026-04-01 00:45:34.066760 | orchestrator | 2026-04-01 00:45:34.066764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066768 | orchestrator | Wednesday 01 April 2026 00:45:27 +0000 (0:00:00.633) 0:00:29.156 ******* 2026-04-01 00:45:34.066772 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363) 2026-04-01 00:45:34.066776 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363) 2026-04-01 00:45:34.066779 | orchestrator | 2026-04-01 00:45:34.066783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066787 | orchestrator | Wednesday 01 April 2026 00:45:28 +0000 (0:00:00.857) 0:00:30.014 ******* 2026-04-01 00:45:34.066791 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67) 2026-04-01 00:45:34.066795 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67) 2026-04-01 00:45:34.066799 | orchestrator | 2026-04-01 00:45:34.066803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:45:34.066806 | orchestrator | Wednesday 01 April 2026 00:45:29 +0000 (0:00:00.467) 0:00:30.482 ******* 2026-04-01 00:45:34.066810 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7) 2026-04-01 00:45:34.066814 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7) 2026-04-01 00:45:34.066818 | orchestrator | 2026-04-01 00:45:34.066822 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-01 00:45:34.066826 | orchestrator | Wednesday 01 April 2026 00:45:29 +0000 (0:00:00.422) 0:00:30.904 ******* 2026-04-01 00:45:34.066829 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:45:34.066833 | orchestrator | 2026-04-01 00:45:34.066837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066851 | orchestrator | Wednesday 01 April 2026 00:45:30 +0000 (0:00:00.359) 0:00:31.264 ******* 2026-04-01 00:45:34.066855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:45:34.066859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:45:34.066863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:45:34.066867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:45:34.066875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:45:34.066879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:45:34.066883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:45:34.066887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:45:34.066891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-01 00:45:34.066894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:45:34.066898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-01 00:45:34.066902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:45:34.066906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:45:34.066909 | orchestrator | 2026-04-01 00:45:34.066913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066917 | orchestrator | Wednesday 01 April 2026 00:45:30 +0000 (0:00:00.399) 0:00:31.663 ******* 2026-04-01 00:45:34.066921 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066925 | orchestrator | 2026-04-01 00:45:34.066929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066933 | orchestrator | Wednesday 01 April 2026 00:45:30 +0000 (0:00:00.238) 0:00:31.902 ******* 2026-04-01 00:45:34.066937 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066942 | orchestrator | 2026-04-01 00:45:34.066948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066952 | orchestrator | Wednesday 01 April 2026 00:45:30 +0000 (0:00:00.208) 0:00:32.110 ******* 2026-04-01 00:45:34.066956 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066960 | orchestrator | 2026-04-01 00:45:34.066964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066967 | orchestrator | Wednesday 01 April 2026 00:45:31 +0000 (0:00:00.207) 0:00:32.318 ******* 2026-04-01 00:45:34.066971 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066975 | orchestrator | 2026-04-01 00:45:34.066982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.066986 | orchestrator | Wednesday 01 April 2026 00:45:31 +0000 (0:00:00.201) 0:00:32.519 ******* 2026-04-01 00:45:34.066990 
| orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.066994 | orchestrator | 2026-04-01 00:45:34.066998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067001 | orchestrator | Wednesday 01 April 2026 00:45:31 +0000 (0:00:00.199) 0:00:32.719 ******* 2026-04-01 00:45:34.067005 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067009 | orchestrator | 2026-04-01 00:45:34.067013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067017 | orchestrator | Wednesday 01 April 2026 00:45:32 +0000 (0:00:00.661) 0:00:33.381 ******* 2026-04-01 00:45:34.067021 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067025 | orchestrator | 2026-04-01 00:45:34.067030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067035 | orchestrator | Wednesday 01 April 2026 00:45:32 +0000 (0:00:00.197) 0:00:33.579 ******* 2026-04-01 00:45:34.067039 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067044 | orchestrator | 2026-04-01 00:45:34.067048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067053 | orchestrator | Wednesday 01 April 2026 00:45:32 +0000 (0:00:00.185) 0:00:33.765 ******* 2026-04-01 00:45:34.067057 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-01 00:45:34.067062 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-01 00:45:34.067070 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-01 00:45:34.067075 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-01 00:45:34.067079 | orchestrator | 2026-04-01 00:45:34.067084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067088 | orchestrator | Wednesday 01 April 2026 00:45:33 +0000 (0:00:00.775) 
0:00:34.541 ******* 2026-04-01 00:45:34.067093 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067097 | orchestrator | 2026-04-01 00:45:34.067102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067106 | orchestrator | Wednesday 01 April 2026 00:45:33 +0000 (0:00:00.189) 0:00:34.730 ******* 2026-04-01 00:45:34.067111 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067115 | orchestrator | 2026-04-01 00:45:34.067120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067125 | orchestrator | Wednesday 01 April 2026 00:45:33 +0000 (0:00:00.201) 0:00:34.932 ******* 2026-04-01 00:45:34.067129 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067133 | orchestrator | 2026-04-01 00:45:34.067138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:45:34.067143 | orchestrator | Wednesday 01 April 2026 00:45:33 +0000 (0:00:00.199) 0:00:35.131 ******* 2026-04-01 00:45:34.067147 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:34.067152 | orchestrator | 2026-04-01 00:45:34.067159 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-01 00:45:38.337590 | orchestrator | Wednesday 01 April 2026 00:45:34 +0000 (0:00:00.184) 0:00:35.316 ******* 2026-04-01 00:45:38.337673 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-01 00:45:38.337735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-01 00:45:38.337744 | orchestrator | 2026-04-01 00:45:38.337753 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-01 00:45:38.337761 | orchestrator | Wednesday 01 April 2026 00:45:34 +0000 (0:00:00.178) 0:00:35.494 ******* 2026-04-01 00:45:38.337770 | orchestrator | skipping: 
[testbed-node-5] 2026-04-01 00:45:38.337778 | orchestrator | 2026-04-01 00:45:38.337786 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-01 00:45:38.337795 | orchestrator | Wednesday 01 April 2026 00:45:34 +0000 (0:00:00.122) 0:00:35.617 ******* 2026-04-01 00:45:38.337803 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:38.337811 | orchestrator | 2026-04-01 00:45:38.337819 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-01 00:45:38.337827 | orchestrator | Wednesday 01 April 2026 00:45:34 +0000 (0:00:00.126) 0:00:35.744 ******* 2026-04-01 00:45:38.337835 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:45:38.337843 | orchestrator | 2026-04-01 00:45:38.337851 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-01 00:45:38.337860 | orchestrator | Wednesday 01 April 2026 00:45:34 +0000 (0:00:00.150) 0:00:35.894 ******* 2026-04-01 00:45:38.337868 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:45:38.337877 | orchestrator | 2026-04-01 00:45:38.337885 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-01 00:45:38.337893 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.373) 0:00:36.268 ******* 2026-04-01 00:45:38.337902 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}}) 2026-04-01 00:45:38.337910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3162267-511d-5f73-a1c4-60a47e452e5f'}}) 2026-04-01 00:45:38.337918 | orchestrator | 2026-04-01 00:45:38.337926 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-01 00:45:38.337934 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.174) 0:00:36.443 ******* 2026-04-01 00:45:38.337942 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}})
2026-04-01 00:45:38.337972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3162267-511d-5f73-a1c4-60a47e452e5f'}})
2026-04-01 00:45:38.337981 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.337989 | orchestrator |
2026-04-01 00:45:38.337998 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-01 00:45:38.338006 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.155) 0:00:36.598 *******
2026-04-01 00:45:38.338054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}})
2026-04-01 00:45:38.338065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3162267-511d-5f73-a1c4-60a47e452e5f'}})
2026-04-01 00:45:38.338073 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338081 | orchestrator |
2026-04-01 00:45:38.338089 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-01 00:45:38.338098 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.148) 0:00:36.747 *******
2026-04-01 00:45:38.338106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}})
2026-04-01 00:45:38.338114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3162267-511d-5f73-a1c4-60a47e452e5f'}})
2026-04-01 00:45:38.338122 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338130 | orchestrator |
2026-04-01 00:45:38.338138 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-01 00:45:38.338146 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.177) 0:00:36.924 *******
2026-04-01 00:45:38.338154 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:45:38.338162 | orchestrator |
2026-04-01 00:45:38.338170 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-01 00:45:38.338178 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.127) 0:00:37.052 *******
2026-04-01 00:45:38.338186 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:45:38.338194 | orchestrator |
2026-04-01 00:45:38.338202 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-01 00:45:38.338210 | orchestrator | Wednesday 01 April 2026 00:45:35 +0000 (0:00:00.138) 0:00:37.190 *******
2026-04-01 00:45:38.338218 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338226 | orchestrator |
2026-04-01 00:45:38.338234 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-01 00:45:38.338242 | orchestrator | Wednesday 01 April 2026 00:45:36 +0000 (0:00:00.119) 0:00:37.309 *******
2026-04-01 00:45:38.338250 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338258 | orchestrator |
2026-04-01 00:45:38.338266 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-01 00:45:38.338274 | orchestrator | Wednesday 01 April 2026 00:45:36 +0000 (0:00:00.127) 0:00:37.437 *******
2026-04-01 00:45:38.338282 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338290 | orchestrator |
2026-04-01 00:45:38.338298 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-01 00:45:38.338306 | orchestrator | Wednesday 01 April 2026 00:45:36 +0000 (0:00:00.132) 0:00:37.569 *******
2026-04-01 00:45:38.338314 | orchestrator | ok: [testbed-node-5] => {
2026-04-01 00:45:38.338332 | orchestrator |     "ceph_osd_devices": {
2026-04-01 00:45:38.338341 | orchestrator |         "sdb": {
2026-04-01 00:45:38.338376 | orchestrator |             "osd_lvm_uuid": "c7c10550-c1bc-5fe3-90d5-7d7a9167f51f"
2026-04-01 00:45:38.338386 | orchestrator |         },
2026-04-01 00:45:38.338394 | orchestrator |         "sdc": {
2026-04-01 00:45:38.338402 | orchestrator |             "osd_lvm_uuid": "d3162267-511d-5f73-a1c4-60a47e452e5f"
2026-04-01 00:45:38.338410 | orchestrator |         }
2026-04-01 00:45:38.338418 | orchestrator |     }
2026-04-01 00:45:38.338426 | orchestrator | }
2026-04-01 00:45:38.338434 | orchestrator |
2026-04-01 00:45:38.338453 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-01 00:45:38.338468 | orchestrator | Wednesday 01 April 2026 00:45:36 +0000 (0:00:00.139) 0:00:37.708 *******
2026-04-01 00:45:38.338477 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338485 | orchestrator |
2026-04-01 00:45:38.338493 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-01 00:45:38.338503 | orchestrator | Wednesday 01 April 2026 00:46:36 +0000 (0:00:00.123) 0:00:37.832 *******
2026-04-01 00:45:38.338517 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338526 | orchestrator |
2026-04-01 00:45:38.338534 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-01 00:45:38.338542 | orchestrator | Wednesday 01 April 2026 00:45:36 +0000 (0:00:00.330) 0:00:38.163 *******
2026-04-01 00:45:38.338550 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:45:38.338557 | orchestrator |
2026-04-01 00:45:38.338565 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-01 00:45:38.338573 | orchestrator | Wednesday 01 April 2026 00:45:37 +0000 (0:00:00.131) 0:00:38.294 *******
2026-04-01 00:45:38.338581 | orchestrator | changed: [testbed-node-5] => {
2026-04-01 00:45:38.338589 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-01 00:45:38.338597 | orchestrator |         "ceph_osd_devices": {
2026-04-01 00:45:38.338605 | orchestrator |             "sdb": {
2026-04-01 00:45:38.338613 | orchestrator |                 "osd_lvm_uuid": "c7c10550-c1bc-5fe3-90d5-7d7a9167f51f"
2026-04-01 00:45:38.338621 | orchestrator |             },
2026-04-01 00:45:38.338629 | orchestrator |             "sdc": {
2026-04-01 00:45:38.338637 | orchestrator |                 "osd_lvm_uuid": "d3162267-511d-5f73-a1c4-60a47e452e5f"
2026-04-01 00:45:38.338645 | orchestrator |             }
2026-04-01 00:45:38.338657 | orchestrator |         },
2026-04-01 00:45:38.338671 | orchestrator |         "lvm_volumes": [
2026-04-01 00:45:38.338696 | orchestrator |             {
2026-04-01 00:45:38.338705 | orchestrator |                 "data": "osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f",
2026-04-01 00:45:38.338713 | orchestrator |                 "data_vg": "ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f"
2026-04-01 00:45:38.338721 | orchestrator |             },
2026-04-01 00:45:38.338729 | orchestrator |             {
2026-04-01 00:45:38.338741 | orchestrator |                 "data": "osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f",
2026-04-01 00:45:38.338749 | orchestrator |                 "data_vg": "ceph-d3162267-511d-5f73-a1c4-60a47e452e5f"
2026-04-01 00:45:38.338757 | orchestrator |             }
2026-04-01 00:45:38.338765 | orchestrator |         ]
2026-04-01 00:45:38.338773 | orchestrator |     }
2026-04-01 00:45:38.338781 | orchestrator | }
2026-04-01 00:45:38.338789 | orchestrator |
2026-04-01 00:45:38.338797 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-01 00:45:38.338805 | orchestrator | Wednesday 01 April 2026 00:45:37 +0000 (0:00:00.203) 0:00:38.498 *******
2026-04-01 00:45:38.338813 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-01 00:45:38.338821 | orchestrator |
2026-04-01 00:45:38.338829 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:45:38.338837 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 00:45:38.338846 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 00:45:38.338854 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-01 00:45:38.338862 | orchestrator |
2026-04-01 00:45:38.338873 | orchestrator |
2026-04-01 00:45:38.338887 | orchestrator |
2026-04-01 00:45:38.338904 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:45:38.338924 | orchestrator | Wednesday 01 April 2026 00:45:38 +0000 (0:00:01.073) 0:00:39.571 *******
2026-04-01 00:45:38.338937 | orchestrator | ===============================================================================
2026-04-01 00:45:38.338958 | orchestrator | Write configuration file ------------------------------------------------ 4.13s
2026-04-01 00:45:38.338970 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2026-04-01 00:45:38.338982 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2026-04-01 00:45:38.338994 | orchestrator | Get initial list of available block devices ----------------------------- 0.90s
2026-04-01 00:45:38.339006 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-04-01 00:45:38.339019 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2026-04-01 00:45:38.339031 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2026-04-01 00:45:38.339042 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-04-01 00:45:38.339054 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-04-01 00:45:38.339067 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-04-01 00:45:38.339080 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-04-01 00:45:38.339093 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s
2026-04-01 00:45:38.339107 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-04-01 00:45:38.339129 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.63s
2026-04-01 00:45:38.698273 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-04-01 00:45:38.698343 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s
2026-04-01 00:45:38.698349 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.59s
2026-04-01 00:45:38.698354 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2026-04-01 00:45:38.698358 | orchestrator | Print configuration data ------------------------------------------------ 0.58s
2026-04-01 00:45:38.698362 | orchestrator | Set WAL devices config data --------------------------------------------- 0.51s
2026-04-01 00:46:00.478267 | orchestrator | 2026-04-01 00:46:00 | INFO  | Task a3659eea-ab05-4534-a2fd-8c86f9f3ec11 (sync inventory) is running in background. Output coming soon.
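The "Compile lvm_volumes" and "Print configuration data" tasks above show the mapping this play computes: each entry in `ceph_osd_devices` becomes one `lvm_volumes` item whose LV (`data`) and VG (`data_vg`) names embed the generated `osd_lvm_uuid`. A minimal Python sketch of that mapping, for illustration only (the play itself does this with `set_fact` templating; `compile_lvm_volumes` is a hypothetical helper name, not from the playbook):

```python
def compile_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Derive the lvm_volumes list from a ceph_osd_devices dict,
    mirroring the structure printed by 'Print configuration data'."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for _device, spec in sorted(ceph_osd_devices.items())
    ]


# The testbed-node-5 data from the log reproduces the printed lvm_volumes:
volumes = compile_lvm_volumes({
    "sdb": {"osd_lvm_uuid": "c7c10550-c1bc-5fe3-90d5-7d7a9167f51f"},
    "sdc": {"osd_lvm_uuid": "d3162267-511d-5f73-a1c4-60a47e452e5f"},
})
```

Applied to the `sdb`/`sdc` devices above, this yields exactly the two `data`/`data_vg` pairs shown in the `_ceph_configure_lvm_config_data` debug output.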
2026-04-01 00:46:27.979576 | orchestrator | 2026-04-01 00:46:01 | INFO  | Starting group_vars file reorganization
2026-04-01 00:46:27.979765 | orchestrator | 2026-04-01 00:46:01 | INFO  | Moved 0 file(s) to their respective directories
2026-04-01 00:46:27.979793 | orchestrator | 2026-04-01 00:46:01 | INFO  | Group_vars file reorganization completed
2026-04-01 00:46:27.979812 | orchestrator | 2026-04-01 00:46:04 | INFO  | Starting variable preparation from inventory
2026-04-01 00:46:27.979827 | orchestrator | 2026-04-01 00:46:07 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-01 00:46:27.979838 | orchestrator | 2026-04-01 00:46:07 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-01 00:46:27.979848 | orchestrator | 2026-04-01 00:46:07 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-01 00:46:27.979858 | orchestrator | 2026-04-01 00:46:07 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-01 00:46:27.979869 | orchestrator | 2026-04-01 00:46:07 | INFO  | Variable preparation completed
2026-04-01 00:46:27.979879 | orchestrator | 2026-04-01 00:46:08 | INFO  | Starting inventory overwrite handling
2026-04-01 00:46:27.979889 | orchestrator | 2026-04-01 00:46:08 | INFO  | Handling group overwrites in 99-overwrite
2026-04-01 00:46:27.979899 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removing group frr:children from 60-generic
2026-04-01 00:46:27.979909 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-01 00:46:27.979947 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-01 00:46:27.979960 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-01 00:46:27.979971 | orchestrator | 2026-04-01 00:46:08 | INFO  | Handling group overwrites in 20-roles
2026-04-01 00:46:27.979982 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-01 00:46:27.979993 | orchestrator | 2026-04-01 00:46:08 | INFO  | Removed 5 group(s) in total
2026-04-01 00:46:27.980005 | orchestrator | 2026-04-01 00:46:08 | INFO  | Inventory overwrite handling completed
2026-04-01 00:46:27.980016 | orchestrator | 2026-04-01 00:46:09 | INFO  | Starting merge of inventory files
2026-04-01 00:46:27.980026 | orchestrator | 2026-04-01 00:46:09 | INFO  | Inventory files merged successfully
2026-04-01 00:46:27.980037 | orchestrator | 2026-04-01 00:46:13 | INFO  | Generating minified hosts file
2026-04-01 00:46:27.980049 | orchestrator | 2026-04-01 00:46:14 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-01 00:46:27.980061 | orchestrator | 2026-04-01 00:46:14 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-01 00:46:27.980072 | orchestrator | 2026-04-01 00:46:15 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-01 00:46:27.980083 | orchestrator | 2026-04-01 00:46:26 | INFO  | Successfully wrote ClusterShell configuration
2026-04-01 00:46:27.980111 | orchestrator | [master 4cbad24] 2026-04-01-00-46
2026-04-01 00:46:27.980126 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-01 00:46:27.980141 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-01 00:46:27.980154 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-01 00:46:27.980166 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-01 00:46:29.231057 | orchestrator | 2026-04-01 00:46:29 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-01 00:46:29.291726 | orchestrator | 2026-04-01 00:46:29 | INFO  | Task ab213c51-d1e6-4491-9e4e-74a5728d1007 (ceph-create-lvm-devices) was prepared for execution.
2026-04-01 00:46:29.291814 | orchestrator | 2026-04-01 00:46:29 | INFO  | It takes a moment until task ab213c51-d1e6-4491-9e4e-74a5728d1007 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-01 00:46:39.789110 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-01 00:46:39.789208 | orchestrator | 2.16.14
2026-04-01 00:46:39.789222 | orchestrator |
2026-04-01 00:46:39.789233 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-01 00:46:39.789243 | orchestrator |
2026-04-01 00:46:39.789252 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-01 00:46:39.789262 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:00.250) 0:00:00.250 *******
2026-04-01 00:46:39.789271 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-01 00:46:39.789280 | orchestrator |
2026-04-01 00:46:39.789289 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-01 00:46:39.789298 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:00.243) 0:00:00.494 *******
2026-04-01 00:46:39.789307 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:46:39.789316 | orchestrator |
2026-04-01 00:46:39.789325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789334 | orchestrator | Wednesday 01 April 2026 00:46:33 +0000 (0:00:00.200) 0:00:00.694 *******
2026-04-01 00:46:39.789343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:46:39.789372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:46:39.789381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:46:39.789390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:46:39.789398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:46:39.789407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:46:39.789429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:46:39.789438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-01 00:46:39.789448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-01 00:46:39.789456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-01 00:46:39.789465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-01 00:46:39.789474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-01 00:46:39.789483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-01 00:46:39.789491 | orchestrator |
2026-04-01 00:46:39.789500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789509 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:00.394) 0:00:01.089 *******
2026-04-01 00:46:39.789518 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789527 | orchestrator |
2026-04-01 00:46:39.789536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789545 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:00.382) 0:00:01.472 *******
2026-04-01 00:46:39.789554 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789562 | orchestrator |
2026-04-01 00:46:39.789571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789580 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:00.170) 0:00:01.642 *******
2026-04-01 00:46:39.789589 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789597 | orchestrator |
2026-04-01 00:46:39.789679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789691 | orchestrator | Wednesday 01 April 2026 00:46:34 +0000 (0:00:00.173) 0:00:01.816 *******
2026-04-01 00:46:39.789701 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789712 | orchestrator |
2026-04-01 00:46:39.789722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789732 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:00.178) 0:00:01.994 *******
2026-04-01 00:46:39.789742 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789753 | orchestrator |
2026-04-01 00:46:39.789764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789774 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:00.168) 0:00:02.163 *******
2026-04-01 00:46:39.789784 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789795 | orchestrator |
2026-04-01 00:46:39.789806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789816 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:00.171) 0:00:02.334 *******
2026-04-01 00:46:39.789826 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789837 | orchestrator |
2026-04-01 00:46:39.789847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789857 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:00.175) 0:00:02.510 *******
2026-04-01 00:46:39.789867 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.789877 | orchestrator |
2026-04-01 00:46:39.789887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789908 | orchestrator | Wednesday 01 April 2026 00:46:35 +0000 (0:00:00.179) 0:00:02.689 *******
2026-04-01 00:46:39.789919 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486)
2026-04-01 00:46:39.789930 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486)
2026-04-01 00:46:39.789941 | orchestrator |
2026-04-01 00:46:39.789951 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.789976 | orchestrator | Wednesday 01 April 2026 00:46:36 +0000 (0:00:00.376) 0:00:03.066 *******
2026-04-01 00:46:39.789988 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896)
2026-04-01 00:46:39.790003 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896)
2026-04-01 00:46:39.790082 | orchestrator |
2026-04-01 00:46:39.790105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.790118 | orchestrator | Wednesday 01 April 2026 00:46:36 +0000 (0:00:00.379) 0:00:03.446 *******
2026-04-01 00:46:39.790130 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402)
2026-04-01 00:46:39.790144 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402)
2026-04-01 00:46:39.790158 | orchestrator |
2026-04-01 00:46:39.790171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.790187 | orchestrator | Wednesday 01 April 2026 00:46:37 +0000 (0:00:00.525) 0:00:03.971 *******
2026-04-01 00:46:39.790202 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1)
2026-04-01 00:46:39.790218 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1)
2026-04-01 00:46:39.790233 | orchestrator |
2026-04-01 00:46:39.790244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:39.790252 | orchestrator | Wednesday 01 April 2026 00:46:37 +0000 (0:00:00.512) 0:00:04.484 *******
2026-04-01 00:46:39.790261 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-01 00:46:39.790270 | orchestrator |
2026-04-01 00:46:39.790278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790287 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:00.592) 0:00:05.077 *******
2026-04-01 00:46:39.790297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-01 00:46:39.790306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-01 00:46:39.790315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-01 00:46:39.790324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-01 00:46:39.790336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-01 00:46:39.790351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-01 00:46:39.790365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-01 00:46:39.790386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-01 00:46:39.790400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-01 00:46:39.790413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-01 00:46:39.790428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-01 00:46:39.790443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-01 00:46:39.790471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-01 00:46:39.790486 | orchestrator |
2026-04-01 00:46:39.790501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790511 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:00.381) 0:00:05.459 *******
2026-04-01 00:46:39.790520 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790529 | orchestrator |
2026-04-01 00:46:39.790538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790547 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:00.178) 0:00:05.638 *******
2026-04-01 00:46:39.790556 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790565 | orchestrator |
2026-04-01 00:46:39.790574 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790583 | orchestrator | Wednesday 01 April 2026 00:46:38 +0000 (0:00:00.182) 0:00:05.820 *******
2026-04-01 00:46:39.790592 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790601 | orchestrator |
2026-04-01 00:46:39.790646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790656 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.175) 0:00:05.996 *******
2026-04-01 00:46:39.790665 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790674 | orchestrator |
2026-04-01 00:46:39.790683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790692 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.188) 0:00:06.184 *******
2026-04-01 00:46:39.790701 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790710 | orchestrator |
2026-04-01 00:46:39.790719 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790728 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.175) 0:00:06.360 *******
2026-04-01 00:46:39.790737 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790745 | orchestrator |
2026-04-01 00:46:39.790754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:39.790763 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.179) 0:00:06.539 *******
2026-04-01 00:46:39.790773 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:39.790781 | orchestrator |
2026-04-01 00:46:39.790801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.390916 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.193) 0:00:06.732 *******
2026-04-01 00:46:47.391043 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391055 | orchestrator |
2026-04-01 00:46:47.391063 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.391070 | orchestrator | Wednesday 01 April 2026 00:46:39 +0000 (0:00:00.174) 0:00:06.907 *******
2026-04-01 00:46:47.391077 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-01 00:46:47.391085 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-01 00:46:47.391103 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-01 00:46:47.391110 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-01 00:46:47.391118 | orchestrator |
2026-04-01 00:46:47.391124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.391131 | orchestrator | Wednesday 01 April 2026 00:46:40 +0000 (0:00:00.872) 0:00:07.779 *******
2026-04-01 00:46:47.391137 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391144 | orchestrator |
2026-04-01 00:46:47.391151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.391157 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.174) 0:00:07.954 *******
2026-04-01 00:46:47.391164 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391170 | orchestrator |
2026-04-01 00:46:47.391176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.391182 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.187) 0:00:08.142 *******
2026-04-01 00:46:47.391212 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391219 | orchestrator |
2026-04-01 00:46:47.391225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:46:47.391231 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.176) 0:00:08.318 *******
2026-04-01 00:46:47.391238 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391244 | orchestrator |
2026-04-01 00:46:47.391250 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-01 00:46:47.391272 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.168) 0:00:08.487 *******
2026-04-01 00:46:47.391279 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391285 | orchestrator |
2026-04-01 00:46:47.391291 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-01 00:46:47.391298 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.124) 0:00:08.611 *******
2026-04-01 00:46:47.391305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '070a6fcd-e232-5822-bdac-2856eb469583'}})
2026-04-01 00:46:47.391312 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '24dba708-820d-5543-af14-6cbe38251993'}})
2026-04-01 00:46:47.391318 | orchestrator |
2026-04-01 00:46:47.391324 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-01 00:46:47.391331 | orchestrator | Wednesday 01 April 2026 00:46:41 +0000 (0:00:00.183) 0:00:08.795 *******
2026-04-01 00:46:47.391338 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391346 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391353 | orchestrator |
2026-04-01 00:46:47.391359 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-01 00:46:47.391366 | orchestrator | Wednesday 01 April 2026 00:46:44 +0000 (0:00:02.166) 0:00:10.962 *******
2026-04-01 00:46:47.391372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391387 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391393 | orchestrator |
2026-04-01 00:46:47.391399 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-01 00:46:47.391405 | orchestrator | Wednesday 01 April 2026 00:46:44 +0000 (0:00:00.136) 0:00:11.099 *******
2026-04-01 00:46:47.391412 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391418 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391426 | orchestrator |
2026-04-01 00:46:47.391433 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-01 00:46:47.391440 | orchestrator | Wednesday 01 April 2026 00:46:45 +0000 (0:00:01.511) 0:00:12.611 *******
2026-04-01 00:46:47.391447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391462 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391469 | orchestrator |
2026-04-01 00:46:47.391477 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-01 00:46:47.391491 | orchestrator | Wednesday 01 April 2026 00:46:45 +0000 (0:00:00.141) 0:00:12.753 *******
2026-04-01 00:46:47.391516 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391523 | orchestrator |
2026-04-01 00:46:47.391530 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-01 00:46:47.391537 | orchestrator | Wednesday 01 April 2026 00:46:45 +0000 (0:00:00.122) 0:00:12.875 *******
2026-04-01 00:46:47.391544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391559 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391566 | orchestrator |
2026-04-01 00:46:47.391574 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-01 00:46:47.391581 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.281) 0:00:13.157 *******
2026-04-01 00:46:47.391588 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391616 | orchestrator |
2026-04-01 00:46:47.391624 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-01 00:46:47.391631 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.126) 0:00:13.284 *******
2026-04-01 00:46:47.391638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391654 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391661 | orchestrator |
2026-04-01 00:46:47.391669 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-01 00:46:47.391676 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.130) 0:00:13.415 *******
2026-04-01 00:46:47.391683 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391691 | orchestrator |
2026-04-01 00:46:47.391699 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-01 00:46:47.391706 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.127) 0:00:13.543 *******
2026-04-01 00:46:47.391714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391729 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391736 | orchestrator |
2026-04-01 00:46:47.391744 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-01 00:46:47.391751 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.139) 0:00:13.682 *******
2026-04-01 00:46:47.391759 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:46:47.391766 | orchestrator |
2026-04-01 00:46:47.391774 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-01 00:46:47.391781 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.123) 0:00:13.805 *******
2026-04-01 00:46:47.391789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391802 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391808 | orchestrator |
2026-04-01 00:46:47.391815 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-01 00:46:47.391821 | orchestrator | Wednesday 01 April 2026 00:46:46 +0000 (0:00:00.132) 0:00:13.940 *******
2026-04-01 00:46:47.391833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391846 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391852 | orchestrator |
2026-04-01 00:46:47.391858 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-01 00:46:47.391864 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.137) 0:00:14.073 *******
2026-04-01 00:46:47.391870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:46:47.391877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:46:47.391883 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391889 | orchestrator |
2026-04-01 00:46:47.391896 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-01 00:46:47.391902 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.127) 0:00:14.210 *******
2026-04-01 00:46:47.391908 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:46:47.391914 | orchestrator |
2026-04-01 00:46:47.391921 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-01 00:46:47.391931 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.127) 0:00:14.337 *******
2026-04-01 00:46:52.854697 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.854811 | orchestrator | 2026-04-01 00:46:52.854826 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-01 00:46:52.854839 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.104) 0:00:14.441 ******* 2026-04-01 00:46:52.854849 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.854860 | orchestrator | 2026-04-01 00:46:52.854870 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-01 00:46:52.854880 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.120) 0:00:14.562 ******* 2026-04-01 00:46:52.854891 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:46:52.854902 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-01 00:46:52.854913 | orchestrator | } 2026-04-01 00:46:52.854923 | orchestrator | 2026-04-01 00:46:52.854933 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-01 00:46:52.854943 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.252) 0:00:14.815 ******* 2026-04-01 00:46:52.854953 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:46:52.854963 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-01 00:46:52.854973 | orchestrator | } 2026-04-01 00:46:52.854986 | orchestrator | 2026-04-01 00:46:52.855003 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-01 00:46:52.855020 | orchestrator | Wednesday 01 April 2026 00:46:47 +0000 (0:00:00.115) 0:00:14.931 ******* 2026-04-01 00:46:52.855035 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:46:52.855060 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-01 00:46:52.855077 | orchestrator | } 2026-04-01 00:46:52.855093 | orchestrator | 2026-04-01 00:46:52.855108 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-04-01 00:46:52.855124 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:00.118) 0:00:15.049 ******* 2026-04-01 00:46:52.855139 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:52.855155 | orchestrator | 2026-04-01 00:46:52.855182 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-01 00:46:52.855198 | orchestrator | Wednesday 01 April 2026 00:46:48 +0000 (0:00:00.654) 0:00:15.703 ******* 2026-04-01 00:46:52.855214 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:52.855258 | orchestrator | 2026-04-01 00:46:52.855277 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-01 00:46:52.855294 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.495) 0:00:16.199 ******* 2026-04-01 00:46:52.855311 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:52.855327 | orchestrator | 2026-04-01 00:46:52.855344 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-01 00:46:52.855361 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.506) 0:00:16.706 ******* 2026-04-01 00:46:52.855380 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:52.855397 | orchestrator | 2026-04-01 00:46:52.855415 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-01 00:46:52.855432 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.130) 0:00:16.836 ******* 2026-04-01 00:46:52.855449 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855466 | orchestrator | 2026-04-01 00:46:52.855485 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-01 00:46:52.855502 | orchestrator | Wednesday 01 April 2026 00:46:49 +0000 (0:00:00.091) 0:00:16.927 ******* 2026-04-01 00:46:52.855520 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:46:52.855538 | orchestrator | 2026-04-01 00:46:52.855556 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-01 00:46:52.855570 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.096) 0:00:17.024 ******* 2026-04-01 00:46:52.855580 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 00:46:52.855656 | orchestrator |  "vgs_report": { 2026-04-01 00:46:52.855670 | orchestrator |  "vg": [] 2026-04-01 00:46:52.855680 | orchestrator |  } 2026-04-01 00:46:52.855690 | orchestrator | } 2026-04-01 00:46:52.855699 | orchestrator | 2026-04-01 00:46:52.855709 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-01 00:46:52.855719 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.128) 0:00:17.152 ******* 2026-04-01 00:46:52.855729 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855738 | orchestrator | 2026-04-01 00:46:52.855748 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-01 00:46:52.855758 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.117) 0:00:17.269 ******* 2026-04-01 00:46:52.855768 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855778 | orchestrator | 2026-04-01 00:46:52.855787 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-01 00:46:52.855797 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.110) 0:00:17.380 ******* 2026-04-01 00:46:52.855807 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855817 | orchestrator | 2026-04-01 00:46:52.855826 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-01 00:46:52.855836 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.113) 0:00:17.494 ******* 2026-04-01 00:46:52.855846 | orchestrator | skipping: [testbed-node-3] 
2026-04-01 00:46:52.855855 | orchestrator | 2026-04-01 00:46:52.855865 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-01 00:46:52.855874 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.251) 0:00:17.745 ******* 2026-04-01 00:46:52.855884 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855894 | orchestrator | 2026-04-01 00:46:52.855903 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-01 00:46:52.855917 | orchestrator | Wednesday 01 April 2026 00:46:50 +0000 (0:00:00.115) 0:00:17.861 ******* 2026-04-01 00:46:52.855933 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.855950 | orchestrator | 2026-04-01 00:46:52.855967 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-01 00:46:52.855983 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.116) 0:00:17.977 ******* 2026-04-01 00:46:52.855995 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856004 | orchestrator | 2026-04-01 00:46:52.856014 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-01 00:46:52.856036 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.121) 0:00:18.098 ******* 2026-04-01 00:46:52.856067 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856077 | orchestrator | 2026-04-01 00:46:52.856087 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-01 00:46:52.856097 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.126) 0:00:18.225 ******* 2026-04-01 00:46:52.856107 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856116 | orchestrator | 2026-04-01 00:46:52.856126 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-01 00:46:52.856135 | orchestrator | 
Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.120) 0:00:18.345 ******* 2026-04-01 00:46:52.856145 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856155 | orchestrator | 2026-04-01 00:46:52.856164 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-01 00:46:52.856174 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.122) 0:00:18.468 ******* 2026-04-01 00:46:52.856184 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856193 | orchestrator | 2026-04-01 00:46:52.856203 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-01 00:46:52.856213 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.112) 0:00:18.580 ******* 2026-04-01 00:46:52.856222 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856232 | orchestrator | 2026-04-01 00:46:52.856242 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-01 00:46:52.856251 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.111) 0:00:18.692 ******* 2026-04-01 00:46:52.856261 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856271 | orchestrator | 2026-04-01 00:46:52.856280 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-01 00:46:52.856290 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.123) 0:00:18.816 ******* 2026-04-01 00:46:52.856300 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856309 | orchestrator | 2026-04-01 00:46:52.856326 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-01 00:46:52.856337 | orchestrator | Wednesday 01 April 2026 00:46:51 +0000 (0:00:00.113) 0:00:18.929 ******* 2026-04-01 00:46:52.856347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 
'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:52.856359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:52.856368 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856378 | orchestrator | 2026-04-01 00:46:52.856388 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-01 00:46:52.856398 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.123) 0:00:19.053 ******* 2026-04-01 00:46:52.856407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:52.856417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:52.856427 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856437 | orchestrator | 2026-04-01 00:46:52.856446 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-01 00:46:52.856456 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.272) 0:00:19.326 ******* 2026-04-01 00:46:52.856466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:52.856476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:52.856493 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856502 | orchestrator | 2026-04-01 00:46:52.856512 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-01 00:46:52.856522 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.139) 0:00:19.465 ******* 2026-04-01 00:46:52.856531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:52.856541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:52.856551 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856561 | orchestrator | 2026-04-01 00:46:52.856570 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-01 00:46:52.856580 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.146) 0:00:19.611 ******* 2026-04-01 00:46:52.856614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:52.856626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:52.856636 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:52.856645 | orchestrator | 2026-04-01 00:46:52.856655 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-01 00:46:52.856665 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.134) 0:00:19.746 ******* 2026-04-01 00:46:52.856682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 
'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757317 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757324 | orchestrator | 2026-04-01 00:46:57.757330 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-01 00:46:57.757336 | orchestrator | Wednesday 01 April 2026 00:46:52 +0000 (0:00:00.129) 0:00:19.875 ******* 2026-04-01 00:46:57.757340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757349 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757352 | orchestrator | 2026-04-01 00:46:57.757356 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-01 00:46:57.757360 | orchestrator | Wednesday 01 April 2026 00:46:53 +0000 (0:00:00.125) 0:00:20.001 ******* 2026-04-01 00:46:57.757364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757372 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757376 | orchestrator | 2026-04-01 00:46:57.757380 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-01 00:46:57.757384 | orchestrator | Wednesday 01 April 2026 00:46:53 +0000 (0:00:00.136) 0:00:20.138 ******* 2026-04-01 00:46:57.757388 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:57.757393 | 
orchestrator | 2026-04-01 00:46:57.757397 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-01 00:46:57.757418 | orchestrator | Wednesday 01 April 2026 00:46:53 +0000 (0:00:00.537) 0:00:20.676 ******* 2026-04-01 00:46:57.757422 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:57.757426 | orchestrator | 2026-04-01 00:46:57.757430 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-01 00:46:57.757434 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:00.632) 0:00:21.308 ******* 2026-04-01 00:46:57.757437 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:46:57.757441 | orchestrator | 2026-04-01 00:46:57.757445 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-01 00:46:57.757459 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:00.131) 0:00:21.439 ******* 2026-04-01 00:46:57.757464 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'vg_name': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'}) 2026-04-01 00:46:57.757470 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'vg_name': 'ceph-24dba708-820d-5543-af14-6cbe38251993'}) 2026-04-01 00:46:57.757473 | orchestrator | 2026-04-01 00:46:57.757477 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-01 00:46:57.757481 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:00.150) 0:00:21.590 ******* 2026-04-01 00:46:57.757485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 
'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757493 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757497 | orchestrator | 2026-04-01 00:46:57.757501 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-01 00:46:57.757505 | orchestrator | Wednesday 01 April 2026 00:46:54 +0000 (0:00:00.142) 0:00:21.732 ******* 2026-04-01 00:46:57.757509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757517 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757520 | orchestrator | 2026-04-01 00:46:57.757524 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-01 00:46:57.757528 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.279) 0:00:22.011 ******* 2026-04-01 00:46:57.757532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})  2026-04-01 00:46:57.757536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})  2026-04-01 00:46:57.757540 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:46:57.757544 | orchestrator | 2026-04-01 00:46:57.757547 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-01 00:46:57.757551 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.149) 0:00:22.160 ******* 2026-04-01 00:46:57.757565 | orchestrator | ok: [testbed-node-3] => { 2026-04-01 
00:46:57.757570 | orchestrator |  "lvm_report": { 2026-04-01 00:46:57.757574 | orchestrator |  "lv": [ 2026-04-01 00:46:57.757578 | orchestrator |  { 2026-04-01 00:46:57.757642 | orchestrator |  "lv_name": "osd-block-070a6fcd-e232-5822-bdac-2856eb469583", 2026-04-01 00:46:57.757651 | orchestrator |  "vg_name": "ceph-070a6fcd-e232-5822-bdac-2856eb469583" 2026-04-01 00:46:57.757657 | orchestrator |  }, 2026-04-01 00:46:57.757663 | orchestrator |  { 2026-04-01 00:46:57.757669 | orchestrator |  "lv_name": "osd-block-24dba708-820d-5543-af14-6cbe38251993", 2026-04-01 00:46:57.757681 | orchestrator |  "vg_name": "ceph-24dba708-820d-5543-af14-6cbe38251993" 2026-04-01 00:46:57.757688 | orchestrator |  } 2026-04-01 00:46:57.757694 | orchestrator |  ], 2026-04-01 00:46:57.757700 | orchestrator |  "pv": [ 2026-04-01 00:46:57.757707 | orchestrator |  { 2026-04-01 00:46:57.757714 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-01 00:46:57.757720 | orchestrator |  "vg_name": "ceph-070a6fcd-e232-5822-bdac-2856eb469583" 2026-04-01 00:46:57.757726 | orchestrator |  }, 2026-04-01 00:46:57.757731 | orchestrator |  { 2026-04-01 00:46:57.757737 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-01 00:46:57.757743 | orchestrator |  "vg_name": "ceph-24dba708-820d-5543-af14-6cbe38251993" 2026-04-01 00:46:57.757749 | orchestrator |  } 2026-04-01 00:46:57.757755 | orchestrator |  ] 2026-04-01 00:46:57.757761 | orchestrator |  } 2026-04-01 00:46:57.757768 | orchestrator | } 2026-04-01 00:46:57.757774 | orchestrator | 2026-04-01 00:46:57.757780 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-01 00:46:57.757786 | orchestrator | 2026-04-01 00:46:57.757792 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:46:57.757803 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.274) 0:00:22.434 ******* 2026-04-01 00:46:57.757810 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-01 00:46:57.757814 | orchestrator | 2026-04-01 00:46:57.757819 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:46:57.757823 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.253) 0:00:22.688 ******* 2026-04-01 00:46:57.757828 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:46:57.757832 | orchestrator | 2026-04-01 00:46:57.757837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757841 | orchestrator | Wednesday 01 April 2026 00:46:55 +0000 (0:00:00.211) 0:00:22.899 ******* 2026-04-01 00:46:57.757846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-01 00:46:57.757850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-01 00:46:57.757855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-01 00:46:57.757859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-01 00:46:57.757864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-01 00:46:57.757868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-01 00:46:57.757873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-01 00:46:57.757877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-01 00:46:57.757881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-01 00:46:57.757886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-01 00:46:57.757891 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-01 00:46:57.757895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-01 00:46:57.757899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-01 00:46:57.757904 | orchestrator | 2026-04-01 00:46:57.757909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757913 | orchestrator | Wednesday 01 April 2026 00:46:56 +0000 (0:00:00.421) 0:00:23.321 ******* 2026-04-01 00:46:57.757917 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:46:57.757920 | orchestrator | 2026-04-01 00:46:57.757924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757932 | orchestrator | Wednesday 01 April 2026 00:46:56 +0000 (0:00:00.166) 0:00:23.488 ******* 2026-04-01 00:46:57.757936 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:46:57.757942 | orchestrator | 2026-04-01 00:46:57.757948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757952 | orchestrator | Wednesday 01 April 2026 00:46:56 +0000 (0:00:00.200) 0:00:23.689 ******* 2026-04-01 00:46:57.757955 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:46:57.757959 | orchestrator | 2026-04-01 00:46:57.757963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757967 | orchestrator | Wednesday 01 April 2026 00:46:56 +0000 (0:00:00.196) 0:00:23.885 ******* 2026-04-01 00:46:57.757971 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:46:57.757974 | orchestrator | 2026-04-01 00:46:57.757978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:46:57.757982 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 
(0:00:00.454) 0:00:24.340 *******
2026-04-01 00:46:57.757986 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:46:57.757990 | orchestrator |
2026-04-01 00:46:57.757994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:46:57.758000 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:00.196) 0:00:24.537 *******
2026-04-01 00:46:57.758005 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:46:57.758009 | orchestrator |
2026-04-01 00:46:57.758058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491386 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:00.166) 0:00:24.703 *******
2026-04-01 00:47:07.491452 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.491461 | orchestrator |
2026-04-01 00:47:07.491468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491474 | orchestrator | Wednesday 01 April 2026 00:46:57 +0000 (0:00:00.184) 0:00:24.887 *******
2026-04-01 00:47:07.491480 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.491485 | orchestrator |
2026-04-01 00:47:07.491491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491497 | orchestrator | Wednesday 01 April 2026 00:46:58 +0000 (0:00:00.169) 0:00:25.057 *******
2026-04-01 00:47:07.491504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d)
2026-04-01 00:47:07.491515 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d)
2026-04-01 00:47:07.491525 | orchestrator |
2026-04-01 00:47:07.491534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491544 | orchestrator | Wednesday 01 April 2026 00:46:58 +0000 (0:00:00.374) 0:00:25.431 *******
2026-04-01 00:47:07.491553 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4)
2026-04-01 00:47:07.491562 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4)
2026-04-01 00:47:07.491605 | orchestrator |
2026-04-01 00:47:07.491614 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491634 | orchestrator | Wednesday 01 April 2026 00:46:58 +0000 (0:00:00.385) 0:00:25.817 *******
2026-04-01 00:47:07.491644 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005)
2026-04-01 00:47:07.491653 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005)
2026-04-01 00:47:07.491662 | orchestrator |
2026-04-01 00:47:07.491671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491680 | orchestrator | Wednesday 01 April 2026 00:46:59 +0000 (0:00:00.393) 0:00:26.211 *******
2026-04-01 00:47:07.491688 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7)
2026-04-01 00:47:07.491714 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7)
2026-04-01 00:47:07.491723 | orchestrator |
2026-04-01 00:47:07.491732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-01 00:47:07.491741 | orchestrator | Wednesday 01 April 2026 00:46:59 +0000 (0:00:00.412) 0:00:26.623 *******
2026-04-01 00:47:07.491749 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-01 00:47:07.491758 | orchestrator |
2026-04-01 00:47:07.491767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.491776 | orchestrator | Wednesday 01 April 2026 00:46:59 +0000 (0:00:00.289) 0:00:26.912 *******
2026-04-01 00:47:07.491785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-01 00:47:07.491795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-01 00:47:07.491806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-01 00:47:07.491816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-01 00:47:07.491825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-01 00:47:07.491835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-01 00:47:07.491844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-01 00:47:07.491854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-01 00:47:07.491863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-01 00:47:07.491872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-01 00:47:07.491882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-01 00:47:07.491890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-01 00:47:07.491900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-01 00:47:07.491909 | orchestrator |
2026-04-01 00:47:07.491918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.491928 | orchestrator | Wednesday 01 April 2026 00:47:00 +0000 (0:00:00.523) 0:00:27.436 *******
2026-04-01 00:47:07.491938 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.491947 | orchestrator |
2026-04-01 00:47:07.491957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.491967 | orchestrator | Wednesday 01 April 2026 00:47:00 +0000 (0:00:00.178) 0:00:27.615 *******
2026-04-01 00:47:07.491978 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.491987 | orchestrator |
2026-04-01 00:47:07.491997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492008 | orchestrator | Wednesday 01 April 2026 00:47:00 +0000 (0:00:00.179) 0:00:27.794 *******
2026-04-01 00:47:07.492018 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492028 | orchestrator |
2026-04-01 00:47:07.492054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492066 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.172) 0:00:27.967 *******
2026-04-01 00:47:07.492075 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492085 | orchestrator |
2026-04-01 00:47:07.492095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492104 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.176) 0:00:28.143 *******
2026-04-01 00:47:07.492114 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492124 | orchestrator |
2026-04-01 00:47:07.492134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492154 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.229) 0:00:28.373 *******
2026-04-01 00:47:07.492165 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492174 | orchestrator |
2026-04-01 00:47:07.492183 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492193 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.180) 0:00:28.553 *******
2026-04-01 00:47:07.492203 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492213 | orchestrator |
2026-04-01 00:47:07.492223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492233 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.165) 0:00:28.719 *******
2026-04-01 00:47:07.492244 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492254 | orchestrator |
2026-04-01 00:47:07.492264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492273 | orchestrator | Wednesday 01 April 2026 00:47:01 +0000 (0:00:00.187) 0:00:28.907 *******
2026-04-01 00:47:07.492284 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-01 00:47:07.492303 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-01 00:47:07.492314 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-01 00:47:07.492323 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-01 00:47:07.492332 | orchestrator |
2026-04-01 00:47:07.492341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492351 | orchestrator | Wednesday 01 April 2026 00:47:02 +0000 (0:00:00.766) 0:00:29.674 *******
2026-04-01 00:47:07.492361 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492371 | orchestrator |
2026-04-01 00:47:07.492381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492391 | orchestrator | Wednesday 01 April 2026 00:47:02 +0000 (0:00:00.186) 0:00:29.860 *******
2026-04-01 00:47:07.492400 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492410 | orchestrator |
2026-04-01 00:47:07.492420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492430 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:00.190) 0:00:30.050 *******
2026-04-01 00:47:07.492439 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492448 | orchestrator |
2026-04-01 00:47:07.492457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-01 00:47:07.492467 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:00.551) 0:00:30.601 *******
2026-04-01 00:47:07.492475 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492483 | orchestrator |
2026-04-01 00:47:07.492492 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-01 00:47:07.492501 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:00.125) 0:00:30.776 *******
2026-04-01 00:47:07.492510 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492520 | orchestrator |
2026-04-01 00:47:07.492529 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-01 00:47:07.492538 | orchestrator | Wednesday 01 April 2026 00:47:03 +0000 (0:00:00.125) 0:00:30.901 *******
2026-04-01 00:47:07.492547 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00bcfd13-59f0-54da-b43f-34edf6af7c7d'}})
2026-04-01 00:47:07.492557 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f8eedd5-4e35-5081-a67e-565e77fef082'}})
2026-04-01 00:47:07.492566 | orchestrator |
2026-04-01 00:47:07.492625 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-01 00:47:07.492636 | orchestrator | Wednesday 01 April 2026 00:47:04 +0000 (0:00:00.177) 0:00:31.079 *******
2026-04-01 00:47:07.492647 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:07.492658 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:07.492680 | orchestrator |
2026-04-01 00:47:07.492690 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-01 00:47:07.492700 | orchestrator | Wednesday 01 April 2026 00:47:06 +0000 (0:00:01.881) 0:00:32.961 *******
2026-04-01 00:47:07.492710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:07.492720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:07.492729 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:07.492738 | orchestrator |
2026-04-01 00:47:07.492746 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-01 00:47:07.492755 | orchestrator | Wednesday 01 April 2026 00:47:06 +0000 (0:00:00.137) 0:00:33.099 *******
2026-04-01 00:47:07.492765 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:07.492785 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.548961 | orchestrator |
2026-04-01 00:47:12.549051 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-01 00:47:12.549062 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:01.418) 0:00:34.517 *******
2026-04-01 00:47:12.549068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549083 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549091 | orchestrator |
2026-04-01 00:47:12.549097 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-01 00:47:12.549103 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:00.123) 0:00:34.660 *******
2026-04-01 00:47:12.549109 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549115 | orchestrator |
2026-04-01 00:47:12.549121 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-01 00:47:12.549128 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:00.123) 0:00:34.784 *******
2026-04-01 00:47:12.549149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549161 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549166 | orchestrator |
2026-04-01 00:47:12.549172 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-01 00:47:12.549178 | orchestrator | Wednesday 01 April 2026 00:47:07 +0000 (0:00:00.133) 0:00:34.917 *******
2026-04-01 00:47:12.549184 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549190 | orchestrator |
2026-04-01 00:47:12.549196 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-01 00:47:12.549201 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.104) 0:00:35.022 *******
2026-04-01 00:47:12.549207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549219 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549246 | orchestrator |
2026-04-01 00:47:12.549253 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-01 00:47:12.549258 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.135) 0:00:35.157 *******
2026-04-01 00:47:12.549264 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549270 | orchestrator |
2026-04-01 00:47:12.549276 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-01 00:47:12.549282 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.267) 0:00:35.425 *******
2026-04-01 00:47:12.549288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549300 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549305 | orchestrator |
2026-04-01 00:47:12.549311 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-01 00:47:12.549317 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.137) 0:00:35.562 *******
2026-04-01 00:47:12.549322 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:12.549330 | orchestrator |
2026-04-01 00:47:12.549335 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-01 00:47:12.549341 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.125) 0:00:35.687 *******
2026-04-01 00:47:12.549347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549359 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549365 | orchestrator |
2026-04-01 00:47:12.549371 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-01 00:47:12.549377 | orchestrator | Wednesday 01 April 2026 00:47:08 +0000 (0:00:00.139) 0:00:35.827 *******
2026-04-01 00:47:12.549382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549394 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549400 | orchestrator |
2026-04-01 00:47:12.549406 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-01 00:47:12.549429 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.128) 0:00:35.956 *******
2026-04-01 00:47:12.549435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:12.549441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:12.549447 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549453 | orchestrator |
2026-04-01 00:47:12.549458 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-01 00:47:12.549464 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.135) 0:00:36.091 *******
2026-04-01 00:47:12.549469 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549475 | orchestrator |
2026-04-01 00:47:12.549481 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-01 00:47:12.549487 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.138) 0:00:36.229 *******
2026-04-01 00:47:12.549492 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549506 | orchestrator |
2026-04-01 00:47:12.549512 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-01 00:47:12.549519 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.117) 0:00:36.346 *******
2026-04-01 00:47:12.549525 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549531 | orchestrator |
2026-04-01 00:47:12.549543 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-01 00:47:12.549549 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.119) 0:00:36.466 *******
2026-04-01 00:47:12.549556 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:47:12.549584 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-01 00:47:12.549592 | orchestrator | }
2026-04-01 00:47:12.549599 | orchestrator |
2026-04-01 00:47:12.549605 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-01 00:47:12.549612 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.122) 0:00:36.588 *******
2026-04-01 00:47:12.549619 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:47:12.549627 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-01 00:47:12.549635 | orchestrator | }
2026-04-01 00:47:12.549643 | orchestrator |
2026-04-01 00:47:12.549651 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-01 00:47:12.549660 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.127) 0:00:36.716 *******
2026-04-01 00:47:12.549668 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:47:12.549676 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-01 00:47:12.549683 | orchestrator | }
2026-04-01 00:47:12.549690 | orchestrator |
2026-04-01 00:47:12.549698 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-01 00:47:12.549705 | orchestrator | Wednesday 01 April 2026 00:47:09 +0000 (0:00:00.122) 0:00:36.839 *******
2026-04-01 00:47:12.549713 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:12.549721 | orchestrator |
2026-04-01 00:47:12.549728 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-01 00:47:12.549736 | orchestrator | Wednesday 01 April 2026 00:47:10 +0000 (0:00:00.658) 0:00:37.498 *******
2026-04-01 00:47:12.549744 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:12.549751 | orchestrator |
2026-04-01 00:47:12.549758 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-01 00:47:12.549766 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.514) 0:00:38.013 *******
2026-04-01 00:47:12.549773 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:12.549781 | orchestrator |
2026-04-01 00:47:12.549789 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-01 00:47:12.549796 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.543) 0:00:38.557 *******
2026-04-01 00:47:12.549804 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:12.549812 | orchestrator |
2026-04-01 00:47:12.549820 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-01 00:47:12.549828 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.140) 0:00:38.697 *******
2026-04-01 00:47:12.549835 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549843 | orchestrator |
2026-04-01 00:47:12.549851 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-01 00:47:12.549859 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.086) 0:00:38.784 *******
2026-04-01 00:47:12.549867 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549875 | orchestrator |
2026-04-01 00:47:12.549882 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-01 00:47:12.549890 | orchestrator | Wednesday 01 April 2026 00:47:11 +0000 (0:00:00.092) 0:00:38.876 *******
2026-04-01 00:47:12.549897 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:47:12.549905 | orchestrator |     "vgs_report": {
2026-04-01 00:47:12.549911 | orchestrator |         "vg": []
2026-04-01 00:47:12.549919 | orchestrator |     }
2026-04-01 00:47:12.549926 | orchestrator | }
2026-04-01 00:47:12.549933 | orchestrator |
2026-04-01 00:47:12.549939 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-01 00:47:12.549956 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.127) 0:00:39.004 *******
2026-04-01 00:47:12.549963 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549970 | orchestrator |
2026-04-01 00:47:12.549976 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-01 00:47:12.549983 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.116) 0:00:39.120 *******
2026-04-01 00:47:12.549990 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.549997 | orchestrator |
2026-04-01 00:47:12.550004 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-01 00:47:12.550012 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.130) 0:00:39.251 *******
2026-04-01 00:47:12.550078 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.550086 | orchestrator |
2026-04-01 00:47:12.550095 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-01 00:47:12.550102 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.122) 0:00:39.374 *******
2026-04-01 00:47:12.550110 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:12.550118 | orchestrator |
2026-04-01 00:47:12.550137 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-01 00:47:16.602543 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.122) 0:00:39.496 *******
2026-04-01 00:47:16.602677 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602688 | orchestrator |
2026-04-01 00:47:16.602695 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-01 00:47:16.602703 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.122) 0:00:39.619 *******
2026-04-01 00:47:16.602711 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602719 | orchestrator |
2026-04-01 00:47:16.602727 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-01 00:47:16.602734 | orchestrator | Wednesday 01 April 2026 00:47:12 +0000 (0:00:00.250) 0:00:39.869 *******
2026-04-01 00:47:16.602741 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602748 | orchestrator |
2026-04-01 00:47:16.602755 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-01 00:47:16.602762 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.128) 0:00:39.997 *******
2026-04-01 00:47:16.602773 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602780 | orchestrator |
2026-04-01 00:47:16.602787 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-01 00:47:16.602795 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.124) 0:00:40.121 *******
2026-04-01 00:47:16.602802 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602809 | orchestrator |
2026-04-01 00:47:16.602817 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-01 00:47:16.602825 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.121) 0:00:40.243 *******
2026-04-01 00:47:16.602832 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602840 | orchestrator |
2026-04-01 00:47:16.602847 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-01 00:47:16.602854 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.101) 0:00:40.344 *******
2026-04-01 00:47:16.602862 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602870 | orchestrator |
2026-04-01 00:47:16.602877 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-01 00:47:16.602885 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.117) 0:00:40.461 *******
2026-04-01 00:47:16.602892 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602900 | orchestrator |
2026-04-01 00:47:16.602907 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-01 00:47:16.602931 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.125) 0:00:40.587 *******
2026-04-01 00:47:16.602939 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602946 | orchestrator |
2026-04-01 00:47:16.602953 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-01 00:47:16.602979 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.111) 0:00:40.699 *******
2026-04-01 00:47:16.602987 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.602995 | orchestrator |
2026-04-01 00:47:16.603001 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-01 00:47:16.603007 | orchestrator | Wednesday 01 April 2026 00:47:13 +0000 (0:00:00.128) 0:00:40.828 *******
2026-04-01 00:47:16.603015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603032 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603039 | orchestrator |
2026-04-01 00:47:16.603046 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-01 00:47:16.603054 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.134) 0:00:40.962 *******
2026-04-01 00:47:16.603061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603076 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603082 | orchestrator |
2026-04-01 00:47:16.603090 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-01 00:47:16.603099 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.135) 0:00:41.097 *******
2026-04-01 00:47:16.603108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603139 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603147 | orchestrator |
2026-04-01 00:47:16.603155 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-01 00:47:16.603164 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.142) 0:00:41.240 *******
2026-04-01 00:47:16.603174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603192 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603201 | orchestrator |
2026-04-01 00:47:16.603226 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-01 00:47:16.603236 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.263) 0:00:41.504 *******
2026-04-01 00:47:16.603256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603273 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603281 | orchestrator |
2026-04-01 00:47:16.603289 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-01 00:47:16.603297 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.138) 0:00:41.642 *******
2026-04-01 00:47:16.603303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603328 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603336 | orchestrator |
2026-04-01 00:47:16.603344 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-01 00:47:16.603352 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.119) 0:00:41.762 *******
2026-04-01 00:47:16.603361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603377 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603385 | orchestrator |
2026-04-01 00:47:16.603394 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-01 00:47:16.603401 | orchestrator | Wednesday 01 April 2026 00:47:14 +0000 (0:00:00.127) 0:00:41.889 *******
2026-04-01 00:47:16.603407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603422 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603428 | orchestrator |
2026-04-01 00:47:16.603434 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-01 00:47:16.603441 | orchestrator | Wednesday 01 April 2026 00:47:15 +0000 (0:00:00.145) 0:00:42.034 *******
2026-04-01 00:47:16.603448 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:16.603456 | orchestrator |
2026-04-01 00:47:16.603463 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-01 00:47:16.603470 | orchestrator | Wednesday 01 April 2026 00:47:15 +0000 (0:00:00.510) 0:00:42.545 *******
2026-04-01 00:47:16.603477 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:16.603484 | orchestrator |
2026-04-01 00:47:16.603491 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-01 00:47:16.603498 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.521) 0:00:43.067 *******
2026-04-01 00:47:16.603505 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:47:16.603512 | orchestrator |
2026-04-01 00:47:16.603519 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-01 00:47:16.603526 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.145) 0:00:43.213 *******
2026-04-01 00:47:16.603533 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'vg_name': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603541 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'vg_name': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603548 | orchestrator |
2026-04-01 00:47:16.603555 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-01 00:47:16.603609 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.158) 0:00:43.372 *******
2026-04-01 00:47:16.603617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:16.603630 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:16.603637 | orchestrator |
2026-04-01 00:47:16.603650 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-01 00:47:16.603658 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.117) 0:00:43.489 *******
2026-04-01 00:47:16.603665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:16.603679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:21.956228 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:21.956325 | orchestrator |
2026-04-01 00:47:21.956338 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-01 00:47:21.956347 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.135) 0:00:43.625 *******
2026-04-01 00:47:21.956356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:47:21.956365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:47:21.956372 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:47:21.956378 | orchestrator |
2026-04-01 00:47:21.956385 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-01 00:47:21.956392 | orchestrator | Wednesday 01 April 2026 00:47:16 +0000 (0:00:00.125) 0:00:43.750 *******
2026-04-01 00:47:21.956398 | orchestrator | ok: [testbed-node-4] => {
2026-04-01 00:47:21.956406 | orchestrator |     "lvm_report": {
2026-04-01 00:47:21.956415 | orchestrator |         "lv": [
2026-04-01 00:47:21.956423 | orchestrator |             {
2026-04-01 00:47:21.956446 | orchestrator |                 "lv_name": "osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d",
2026-04-01 00:47:21.956455 | orchestrator |                 "vg_name": "ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d"
2026-04-01 00:47:21.956461 | orchestrator |             },
2026-04-01 00:47:21.956467 | orchestrator |             {
2026-04-01 00:47:21.956474 | orchestrator |                 "lv_name": "osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082",
2026-04-01 00:47:21.956479 | orchestrator |                 "vg_name": "ceph-2f8eedd5-4e35-5081-a67e-565e77fef082"
2026-04-01 00:47:21.956486 | orchestrator |             }
2026-04-01 00:47:21.956491 | orchestrator |         ],
2026-04-01 00:47:21.956498 | orchestrator |         "pv": [
2026-04-01 00:47:21.956504 | orchestrator |             {
2026-04-01 00:47:21.956511 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-01
00:47:21.956517 | orchestrator |  "vg_name": "ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d" 2026-04-01 00:47:21.956523 | orchestrator |  }, 2026-04-01 00:47:21.956530 | orchestrator |  { 2026-04-01 00:47:21.956536 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-01 00:47:21.956547 | orchestrator |  "vg_name": "ceph-2f8eedd5-4e35-5081-a67e-565e77fef082" 2026-04-01 00:47:21.956578 | orchestrator |  } 2026-04-01 00:47:21.956585 | orchestrator |  ] 2026-04-01 00:47:21.956591 | orchestrator |  } 2026-04-01 00:47:21.956597 | orchestrator | } 2026-04-01 00:47:21.956603 | orchestrator | 2026-04-01 00:47:21.956609 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-01 00:47:21.956616 | orchestrator | 2026-04-01 00:47:21.956621 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-01 00:47:21.956627 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.407) 0:00:44.158 ******* 2026-04-01 00:47:21.956633 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-01 00:47:21.956639 | orchestrator | 2026-04-01 00:47:21.956645 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-01 00:47:21.956651 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.216) 0:00:44.375 ******* 2026-04-01 00:47:21.956657 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:21.956687 | orchestrator | 2026-04-01 00:47:21.956696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956702 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.202) 0:00:44.578 ******* 2026-04-01 00:47:21.956708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:47:21.956714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-01 
00:47:21.956720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:47:21.956726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:47:21.956736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:47:21.956742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:47:21.956748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:47:21.956755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:47:21.956761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-01 00:47:21.956768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:47:21.956774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:47:21.956781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:47:21.956787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:47:21.956794 | orchestrator | 2026-04-01 00:47:21.956800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956807 | orchestrator | Wednesday 01 April 2026 00:47:17 +0000 (0:00:00.359) 0:00:44.937 ******* 2026-04-01 00:47:21.956813 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956821 | orchestrator | 2026-04-01 00:47:21.956826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956830 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:00.184) 0:00:45.122 
******* 2026-04-01 00:47:21.956835 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956840 | orchestrator | 2026-04-01 00:47:21.956844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956862 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:00.180) 0:00:45.302 ******* 2026-04-01 00:47:21.956867 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956872 | orchestrator | 2026-04-01 00:47:21.956876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956882 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:00.177) 0:00:45.480 ******* 2026-04-01 00:47:21.956888 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956895 | orchestrator | 2026-04-01 00:47:21.956900 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956905 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:00.181) 0:00:45.661 ******* 2026-04-01 00:47:21.956910 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956914 | orchestrator | 2026-04-01 00:47:21.956918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956923 | orchestrator | Wednesday 01 April 2026 00:47:18 +0000 (0:00:00.177) 0:00:45.838 ******* 2026-04-01 00:47:21.956927 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956932 | orchestrator | 2026-04-01 00:47:21.956936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956941 | orchestrator | Wednesday 01 April 2026 00:47:19 +0000 (0:00:00.480) 0:00:46.318 ******* 2026-04-01 00:47:21.956946 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956951 | orchestrator | 2026-04-01 00:47:21.956961 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2026-04-01 00:47:21.956966 | orchestrator | Wednesday 01 April 2026 00:47:19 +0000 (0:00:00.178) 0:00:46.497 ******* 2026-04-01 00:47:21.956970 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:21.956974 | orchestrator | 2026-04-01 00:47:21.956978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.956982 | orchestrator | Wednesday 01 April 2026 00:47:19 +0000 (0:00:00.172) 0:00:46.670 ******* 2026-04-01 00:47:21.956986 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49) 2026-04-01 00:47:21.956991 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49) 2026-04-01 00:47:21.956995 | orchestrator | 2026-04-01 00:47:21.956999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.957003 | orchestrator | Wednesday 01 April 2026 00:47:20 +0000 (0:00:00.364) 0:00:47.034 ******* 2026-04-01 00:47:21.957006 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363) 2026-04-01 00:47:21.957010 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363) 2026-04-01 00:47:21.957014 | orchestrator | 2026-04-01 00:47:21.957018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.957022 | orchestrator | Wednesday 01 April 2026 00:47:20 +0000 (0:00:00.410) 0:00:47.444 ******* 2026-04-01 00:47:21.957025 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67) 2026-04-01 00:47:21.957029 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67) 2026-04-01 00:47:21.957033 | orchestrator | 2026-04-01 00:47:21.957037 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-01 00:47:21.957041 | orchestrator | Wednesday 01 April 2026 00:47:20 +0000 (0:00:00.414) 0:00:47.859 ******* 2026-04-01 00:47:21.957045 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7) 2026-04-01 00:47:21.957048 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7) 2026-04-01 00:47:21.957052 | orchestrator | 2026-04-01 00:47:21.957056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-01 00:47:21.957060 | orchestrator | Wednesday 01 April 2026 00:47:21 +0000 (0:00:00.418) 0:00:48.277 ******* 2026-04-01 00:47:21.957064 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-01 00:47:21.957068 | orchestrator | 2026-04-01 00:47:21.957072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:21.957075 | orchestrator | Wednesday 01 April 2026 00:47:21 +0000 (0:00:00.318) 0:00:48.596 ******* 2026-04-01 00:47:21.957080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-01 00:47:21.957086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-01 00:47:21.957092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-01 00:47:21.957100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-01 00:47:21.957109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-01 00:47:21.957114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-01 00:47:21.957120 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-01 00:47:21.957126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-01 00:47:21.957132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-01 00:47:21.957178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-01 00:47:21.957183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-01 00:47:21.957192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-01 00:47:29.938486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-01 00:47:29.938629 | orchestrator | 2026-04-01 00:47:29.938640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938645 | orchestrator | Wednesday 01 April 2026 00:47:22 +0000 (0:00:00.392) 0:00:48.989 ******* 2026-04-01 00:47:29.938649 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938653 | orchestrator | 2026-04-01 00:47:29.938657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938661 | orchestrator | Wednesday 01 April 2026 00:47:22 +0000 (0:00:00.212) 0:00:49.202 ******* 2026-04-01 00:47:29.938665 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938669 | orchestrator | 2026-04-01 00:47:29.938672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938676 | orchestrator | Wednesday 01 April 2026 00:47:22 +0000 (0:00:00.201) 0:00:49.404 ******* 2026-04-01 00:47:29.938680 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938684 | orchestrator | 2026-04-01 00:47:29.938694 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938706 | orchestrator | Wednesday 01 April 2026 00:47:22 +0000 (0:00:00.502) 0:00:49.906 ******* 2026-04-01 00:47:29.938710 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938714 | orchestrator | 2026-04-01 00:47:29.938718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938721 | orchestrator | Wednesday 01 April 2026 00:47:23 +0000 (0:00:00.191) 0:00:50.098 ******* 2026-04-01 00:47:29.938725 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938728 | orchestrator | 2026-04-01 00:47:29.938732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938736 | orchestrator | Wednesday 01 April 2026 00:47:23 +0000 (0:00:00.222) 0:00:50.320 ******* 2026-04-01 00:47:29.938739 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938743 | orchestrator | 2026-04-01 00:47:29.938747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938751 | orchestrator | Wednesday 01 April 2026 00:47:23 +0000 (0:00:00.193) 0:00:50.513 ******* 2026-04-01 00:47:29.938755 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938758 | orchestrator | 2026-04-01 00:47:29.938762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938766 | orchestrator | Wednesday 01 April 2026 00:47:23 +0000 (0:00:00.170) 0:00:50.684 ******* 2026-04-01 00:47:29.938770 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938773 | orchestrator | 2026-04-01 00:47:29.938777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938781 | orchestrator | Wednesday 01 April 2026 00:47:23 +0000 (0:00:00.222) 0:00:50.907 ******* 
2026-04-01 00:47:29.938785 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-01 00:47:29.938789 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-01 00:47:29.938793 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-01 00:47:29.938798 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-01 00:47:29.938803 | orchestrator | 2026-04-01 00:47:29.938811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938817 | orchestrator | Wednesday 01 April 2026 00:47:24 +0000 (0:00:00.581) 0:00:51.488 ******* 2026-04-01 00:47:29.938822 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938833 | orchestrator | 2026-04-01 00:47:29.938840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938846 | orchestrator | Wednesday 01 April 2026 00:47:24 +0000 (0:00:00.175) 0:00:51.664 ******* 2026-04-01 00:47:29.938865 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938872 | orchestrator | 2026-04-01 00:47:29.938878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938883 | orchestrator | Wednesday 01 April 2026 00:47:24 +0000 (0:00:00.171) 0:00:51.835 ******* 2026-04-01 00:47:29.938888 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938894 | orchestrator | 2026-04-01 00:47:29.938899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-01 00:47:29.938904 | orchestrator | Wednesday 01 April 2026 00:47:25 +0000 (0:00:00.170) 0:00:52.006 ******* 2026-04-01 00:47:29.938910 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938915 | orchestrator | 2026-04-01 00:47:29.938920 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-01 00:47:29.938926 | orchestrator | Wednesday 01 April 2026 00:47:25 
+0000 (0:00:00.186) 0:00:52.192 ******* 2026-04-01 00:47:29.938929 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.938932 | orchestrator | 2026-04-01 00:47:29.938936 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-01 00:47:29.938939 | orchestrator | Wednesday 01 April 2026 00:47:25 +0000 (0:00:00.129) 0:00:52.322 ******* 2026-04-01 00:47:29.938942 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}}) 2026-04-01 00:47:29.938946 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd3162267-511d-5f73-a1c4-60a47e452e5f'}}) 2026-04-01 00:47:29.938949 | orchestrator | 2026-04-01 00:47:29.938952 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-01 00:47:29.938955 | orchestrator | Wednesday 01 April 2026 00:47:25 +0000 (0:00:00.369) 0:00:52.691 ******* 2026-04-01 00:47:29.938959 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}) 2026-04-01 00:47:29.938963 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'}) 2026-04-01 00:47:29.938966 | orchestrator | 2026-04-01 00:47:29.938969 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-01 00:47:29.938982 | orchestrator | Wednesday 01 April 2026 00:47:27 +0000 (0:00:01.698) 0:00:54.390 ******* 2026-04-01 00:47:29.938986 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:29.938990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:29.938993 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939002 | orchestrator | 2026-04-01 00:47:29.939005 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-01 00:47:29.939008 | orchestrator | Wednesday 01 April 2026 00:47:27 +0000 (0:00:00.135) 0:00:54.525 ******* 2026-04-01 00:47:29.939011 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}) 2026-04-01 00:47:29.939018 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'}) 2026-04-01 00:47:29.939021 | orchestrator | 2026-04-01 00:47:29.939024 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-01 00:47:29.939027 | orchestrator | Wednesday 01 April 2026 00:47:28 +0000 (0:00:01.141) 0:00:55.666 ******* 2026-04-01 00:47:29.939030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:29.939034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:29.939041 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939044 | orchestrator | 2026-04-01 00:47:29.939047 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-01 00:47:29.939050 | orchestrator | Wednesday 01 April 2026 00:47:28 +0000 (0:00:00.164) 0:00:55.831 ******* 2026-04-01 00:47:29.939053 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939056 | 
orchestrator | 2026-04-01 00:47:29.939060 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-01 00:47:29.939063 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.126) 0:00:55.957 ******* 2026-04-01 00:47:29.939066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:29.939069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:29.939073 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939076 | orchestrator | 2026-04-01 00:47:29.939079 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-01 00:47:29.939082 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.157) 0:00:56.115 ******* 2026-04-01 00:47:29.939085 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939088 | orchestrator | 2026-04-01 00:47:29.939092 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-01 00:47:29.939095 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.136) 0:00:56.252 ******* 2026-04-01 00:47:29.939098 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:29.939101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:29.939105 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939108 | orchestrator | 2026-04-01 00:47:29.939111 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-01 00:47:29.939114 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.145) 0:00:56.398 ******* 2026-04-01 00:47:29.939117 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939121 | orchestrator | 2026-04-01 00:47:29.939124 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-01 00:47:29.939127 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.132) 0:00:56.531 ******* 2026-04-01 00:47:29.939130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:29.939133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:29.939137 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:29.939140 | orchestrator | 2026-04-01 00:47:29.939143 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-01 00:47:29.939146 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.148) 0:00:56.680 ******* 2026-04-01 00:47:29.939149 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:29.939153 | orchestrator | 2026-04-01 00:47:29.939156 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-01 00:47:29.939159 | orchestrator | Wednesday 01 April 2026 00:47:29 +0000 (0:00:00.141) 0:00:56.821 ******* 2026-04-01 00:47:29.939166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:36.145008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:36.145081 | 
orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145090 | orchestrator | 2026-04-01 00:47:36.145096 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-01 00:47:36.145102 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.370) 0:00:57.192 ******* 2026-04-01 00:47:36.145107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:36.145113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:36.145118 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145123 | orchestrator | 2026-04-01 00:47:36.145128 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-01 00:47:36.145140 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.149) 0:00:57.342 ******* 2026-04-01 00:47:36.145145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:36.145150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:36.145155 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145161 | orchestrator | 2026-04-01 00:47:36.145165 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-01 00:47:36.145170 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.133) 0:00:57.475 ******* 2026-04-01 00:47:36.145175 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145181 | orchestrator | 2026-04-01 00:47:36.145186 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-01 00:47:36.145191 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.135) 0:00:57.610 ******* 2026-04-01 00:47:36.145196 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145202 | orchestrator | 2026-04-01 00:47:36.145207 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-01 00:47:36.145212 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.132) 0:00:57.743 ******* 2026-04-01 00:47:36.145217 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145222 | orchestrator | 2026-04-01 00:47:36.145228 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-01 00:47:36.145233 | orchestrator | Wednesday 01 April 2026 00:47:30 +0000 (0:00:00.133) 0:00:57.877 ******* 2026-04-01 00:47:36.145238 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:47:36.145244 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-01 00:47:36.145250 | orchestrator | } 2026-04-01 00:47:36.145255 | orchestrator | 2026-04-01 00:47:36.145260 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-01 00:47:36.145264 | orchestrator | Wednesday 01 April 2026 00:47:31 +0000 (0:00:00.126) 0:00:58.003 ******* 2026-04-01 00:47:36.145269 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:47:36.145274 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-01 00:47:36.145279 | orchestrator | } 2026-04-01 00:47:36.145284 | orchestrator | 2026-04-01 00:47:36.145290 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-01 00:47:36.145295 | orchestrator | Wednesday 01 April 2026 00:47:31 +0000 (0:00:00.139) 0:00:58.142 ******* 2026-04-01 00:47:36.145300 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:47:36.145305 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-01 00:47:36.145310 | orchestrator | } 2026-04-01 00:47:36.145315 | orchestrator | 2026-04-01 00:47:36.145320 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-01 00:47:36.145332 | orchestrator | Wednesday 01 April 2026 00:47:31 +0000 (0:00:00.133) 0:00:58.276 ******* 2026-04-01 00:47:36.145343 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:36.145349 | orchestrator | 2026-04-01 00:47:36.145354 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-01 00:47:36.145359 | orchestrator | Wednesday 01 April 2026 00:47:31 +0000 (0:00:00.506) 0:00:58.782 ******* 2026-04-01 00:47:36.145364 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:36.145369 | orchestrator | 2026-04-01 00:47:36.145375 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-01 00:47:36.145380 | orchestrator | Wednesday 01 April 2026 00:47:32 +0000 (0:00:00.480) 0:00:59.262 ******* 2026-04-01 00:47:36.145385 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:36.145390 | orchestrator | 2026-04-01 00:47:36.145395 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-01 00:47:36.145400 | orchestrator | Wednesday 01 April 2026 00:47:32 +0000 (0:00:00.493) 0:00:59.756 ******* 2026-04-01 00:47:36.145405 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:36.145411 | orchestrator | 2026-04-01 00:47:36.145416 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-01 00:47:36.145421 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.391) 0:01:00.148 ******* 2026-04-01 00:47:36.145426 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145431 | orchestrator | 2026-04-01 00:47:36.145436 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-01 00:47:36.145442 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.132) 0:01:00.280 ******* 2026-04-01 00:47:36.145447 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145452 | orchestrator | 2026-04-01 00:47:36.145458 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-01 00:47:36.145462 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.114) 0:01:00.395 ******* 2026-04-01 00:47:36.145465 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:47:36.145468 | orchestrator |  "vgs_report": { 2026-04-01 00:47:36.145471 | orchestrator |  "vg": [] 2026-04-01 00:47:36.145485 | orchestrator |  } 2026-04-01 00:47:36.145491 | orchestrator | } 2026-04-01 00:47:36.145495 | orchestrator | 2026-04-01 00:47:36.145501 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-01 00:47:36.145506 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.159) 0:01:00.554 ******* 2026-04-01 00:47:36.145511 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145516 | orchestrator | 2026-04-01 00:47:36.145521 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-01 00:47:36.145527 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.131) 0:01:00.685 ******* 2026-04-01 00:47:36.145584 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145597 | orchestrator | 2026-04-01 00:47:36.145603 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-01 00:47:36.145608 | orchestrator | Wednesday 01 April 2026 00:47:33 +0000 (0:00:00.135) 0:01:00.821 ******* 2026-04-01 00:47:36.145614 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145619 | orchestrator | 2026-04-01 00:47:36.145625 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-01 00:47:36.145630 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.129) 0:01:00.950 ******* 2026-04-01 00:47:36.145636 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145641 | orchestrator | 2026-04-01 00:47:36.145646 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-01 00:47:36.145652 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.128) 0:01:01.078 ******* 2026-04-01 00:47:36.145657 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145662 | orchestrator | 2026-04-01 00:47:36.145668 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-01 00:47:36.145673 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.185) 0:01:01.264 ******* 2026-04-01 00:47:36.145678 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145683 | orchestrator | 2026-04-01 00:47:36.145688 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-01 00:47:36.145699 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.129) 0:01:01.394 ******* 2026-04-01 00:47:36.145705 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145710 | orchestrator | 2026-04-01 00:47:36.145716 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-01 00:47:36.145721 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.133) 0:01:01.527 ******* 2026-04-01 00:47:36.145727 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145732 | orchestrator | 2026-04-01 00:47:36.145738 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-01 00:47:36.145744 | orchestrator | Wednesday 01 April 2026 00:47:34 +0000 (0:00:00.134) 0:01:01.662 ******* 2026-04-01 00:47:36.145749 | orchestrator | skipping: 
[testbed-node-5] 2026-04-01 00:47:36.145756 | orchestrator | 2026-04-01 00:47:36.145760 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-01 00:47:36.145764 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.369) 0:01:02.032 ******* 2026-04-01 00:47:36.145768 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145771 | orchestrator | 2026-04-01 00:47:36.145775 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-01 00:47:36.145779 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.143) 0:01:02.176 ******* 2026-04-01 00:47:36.145782 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145786 | orchestrator | 2026-04-01 00:47:36.145790 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-01 00:47:36.145793 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.142) 0:01:02.319 ******* 2026-04-01 00:47:36.145797 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145801 | orchestrator | 2026-04-01 00:47:36.145804 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-01 00:47:36.145808 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.138) 0:01:02.457 ******* 2026-04-01 00:47:36.145812 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145815 | orchestrator | 2026-04-01 00:47:36.145819 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-01 00:47:36.145823 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.142) 0:01:02.600 ******* 2026-04-01 00:47:36.145826 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145830 | orchestrator | 2026-04-01 00:47:36.145834 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-01 00:47:36.145838 | 
orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.130) 0:01:02.730 ******* 2026-04-01 00:47:36.145841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:36.145845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:36.145849 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145853 | orchestrator | 2026-04-01 00:47:36.145856 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-01 00:47:36.145860 | orchestrator | Wednesday 01 April 2026 00:47:35 +0000 (0:00:00.145) 0:01:02.876 ******* 2026-04-01 00:47:36.145863 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:36.145866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:36.145874 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:36.145877 | orchestrator | 2026-04-01 00:47:36.145881 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-01 00:47:36.145884 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.150) 0:01:03.026 ******* 2026-04-01 00:47:36.145894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.061734 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 
00:47:39.061794 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.061805 | orchestrator | 2026-04-01 00:47:39.061814 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-01 00:47:39.061824 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.148) 0:01:03.175 ******* 2026-04-01 00:47:39.061832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.061852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.061861 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.061869 | orchestrator | 2026-04-01 00:47:39.061878 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-01 00:47:39.061887 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.133) 0:01:03.309 ******* 2026-04-01 00:47:39.061895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.061904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.061913 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.061922 | orchestrator | 2026-04-01 00:47:39.061930 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-01 00:47:39.061938 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.164) 0:01:03.474 ******* 2026-04-01 00:47:39.061947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 
'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.061956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.061965 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.061973 | orchestrator | 2026-04-01 00:47:39.061982 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-01 00:47:39.061992 | orchestrator | Wednesday 01 April 2026 00:47:36 +0000 (0:00:00.143) 0:01:03.617 ******* 2026-04-01 00:47:39.061998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.062003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.062008 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.062057 | orchestrator | 2026-04-01 00:47:39.062063 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-01 00:47:39.062068 | orchestrator | Wednesday 01 April 2026 00:47:37 +0000 (0:00:00.361) 0:01:03.978 ******* 2026-04-01 00:47:39.062073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.062079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.062084 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.062089 | orchestrator | 2026-04-01 00:47:39.062095 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-01 
00:47:39.062114 | orchestrator | Wednesday 01 April 2026 00:47:37 +0000 (0:00:00.177) 0:01:04.156 ******* 2026-04-01 00:47:39.062120 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:39.062126 | orchestrator | 2026-04-01 00:47:39.062131 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-01 00:47:39.062136 | orchestrator | Wednesday 01 April 2026 00:47:37 +0000 (0:00:00.481) 0:01:04.637 ******* 2026-04-01 00:47:39.062141 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:39.062146 | orchestrator | 2026-04-01 00:47:39.062152 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-01 00:47:39.062157 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.457) 0:01:05.095 ******* 2026-04-01 00:47:39.062162 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:47:39.062167 | orchestrator | 2026-04-01 00:47:39.062172 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-01 00:47:39.062177 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.158) 0:01:05.254 ******* 2026-04-01 00:47:39.062183 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'vg_name': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'}) 2026-04-01 00:47:39.062189 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'vg_name': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'}) 2026-04-01 00:47:39.062195 | orchestrator | 2026-04-01 00:47:39.062200 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-01 00:47:39.062205 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.168) 0:01:05.422 ******* 2026-04-01 00:47:39.062222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 
'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.062227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.062233 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.062238 | orchestrator | 2026-04-01 00:47:39.062243 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-01 00:47:39.062248 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.150) 0:01:05.573 ******* 2026-04-01 00:47:39.062253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.062263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.062268 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.062273 | orchestrator | 2026-04-01 00:47:39.062278 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-01 00:47:39.062283 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.144) 0:01:05.717 ******* 2026-04-01 00:47:39.062288 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})  2026-04-01 00:47:39.062294 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})  2026-04-01 00:47:39.062299 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:47:39.062304 | orchestrator | 2026-04-01 00:47:39.062310 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-01 
00:47:39.062317 | orchestrator | Wednesday 01 April 2026 00:47:38 +0000 (0:00:00.154) 0:01:05.872 ******* 2026-04-01 00:47:39.062323 | orchestrator | ok: [testbed-node-5] => { 2026-04-01 00:47:39.062329 | orchestrator |  "lvm_report": { 2026-04-01 00:47:39.062335 | orchestrator |  "lv": [ 2026-04-01 00:47:39.062341 | orchestrator |  { 2026-04-01 00:47:39.062347 | orchestrator |  "lv_name": "osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f", 2026-04-01 00:47:39.062358 | orchestrator |  "vg_name": "ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f" 2026-04-01 00:47:39.062366 | orchestrator |  }, 2026-04-01 00:47:39.062376 | orchestrator |  { 2026-04-01 00:47:39.062385 | orchestrator |  "lv_name": "osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f", 2026-04-01 00:47:39.062394 | orchestrator |  "vg_name": "ceph-d3162267-511d-5f73-a1c4-60a47e452e5f" 2026-04-01 00:47:39.062403 | orchestrator |  } 2026-04-01 00:47:39.062412 | orchestrator |  ], 2026-04-01 00:47:39.062421 | orchestrator |  "pv": [ 2026-04-01 00:47:39.062431 | orchestrator |  { 2026-04-01 00:47:39.062441 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-01 00:47:39.062451 | orchestrator |  "vg_name": "ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f" 2026-04-01 00:47:39.062460 | orchestrator |  }, 2026-04-01 00:47:39.062467 | orchestrator |  { 2026-04-01 00:47:39.062473 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-01 00:47:39.062480 | orchestrator |  "vg_name": "ceph-d3162267-511d-5f73-a1c4-60a47e452e5f" 2026-04-01 00:47:39.062486 | orchestrator |  } 2026-04-01 00:47:39.062492 | orchestrator |  ] 2026-04-01 00:47:39.062498 | orchestrator |  } 2026-04-01 00:47:39.062504 | orchestrator | } 2026-04-01 00:47:39.062510 | orchestrator | 2026-04-01 00:47:39.062515 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:47:39.062522 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:47:39.062549 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:47:39.062557 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-01 00:47:39.062563 | orchestrator | 2026-04-01 00:47:39.062568 | orchestrator | 2026-04-01 00:47:39.062574 | orchestrator | 2026-04-01 00:47:39.062580 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:47:39.062586 | orchestrator | Wednesday 01 April 2026 00:47:39 +0000 (0:00:00.126) 0:01:05.999 ******* 2026-04-01 00:47:39.062595 | orchestrator | =============================================================================== 2026-04-01 00:47:39.062608 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s 2026-04-01 00:47:39.062619 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s 2026-04-01 00:47:39.062627 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2026-04-01 00:47:39.062637 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-04-01 00:47:39.062646 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2026-04-01 00:47:39.062655 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2026-04-01 00:47:39.062664 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.49s 2026-04-01 00:47:39.062673 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s 2026-04-01 00:47:39.062690 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2026-04-01 00:47:39.476771 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-04-01 
00:47:39.476886 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s 2026-04-01 00:47:39.476893 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-01 00:47:39.476897 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.73s 2026-04-01 00:47:39.476901 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2026-04-01 00:47:39.476924 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.66s 2026-04-01 00:47:39.476928 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.65s 2026-04-01 00:47:39.476932 | orchestrator | Get initial list of available block devices ----------------------------- 0.61s 2026-04-01 00:47:39.476945 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.61s 2026-04-01 00:47:39.476949 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.61s 2026-04-01 00:47:39.476953 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-04-01 00:47:51.090444 | orchestrator | 2026-04-01 00:47:51 | INFO  | Prepare task for execution of facts. 2026-04-01 00:47:51.176703 | orchestrator | 2026-04-01 00:47:51 | INFO  | Task 695a5023-6c55-414d-a938-79899d879d8a (facts) was prepared for execution. 2026-04-01 00:47:51.176888 | orchestrator | 2026-04-01 00:47:51 | INFO  | It takes a moment until task 695a5023-6c55-414d-a938-79899d879d8a (facts) has been started and output is visible here. 
2026-04-01 00:48:01.774599 | orchestrator | 2026-04-01 00:48:01.774661 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-01 00:48:01.774669 | orchestrator | 2026-04-01 00:48:01.774676 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-01 00:48:01.774681 | orchestrator | Wednesday 01 April 2026 00:47:54 +0000 (0:00:00.309) 0:00:00.309 ******* 2026-04-01 00:48:01.774686 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:01.774693 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:01.774699 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:01.774704 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:01.774710 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:48:01.774715 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:48:01.774720 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:48:01.774725 | orchestrator | 2026-04-01 00:48:01.774731 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-01 00:48:01.774737 | orchestrator | Wednesday 01 April 2026 00:47:55 +0000 (0:00:01.247) 0:00:01.557 ******* 2026-04-01 00:48:01.774742 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:01.774748 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:01.774754 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:01.774759 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:01.774765 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:01.774770 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:01.774775 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:01.774781 | orchestrator | 2026-04-01 00:48:01.774786 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-01 00:48:01.774792 | orchestrator | 2026-04-01 00:48:01.774797 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-01 00:48:01.774803 | orchestrator | Wednesday 01 April 2026 00:47:56 +0000 (0:00:01.286) 0:00:02.844 ******* 2026-04-01 00:48:01.774808 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:48:01.774813 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:48:01.774818 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:48:01.774824 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:01.774829 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:48:01.774834 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:48:01.774839 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:48:01.774845 | orchestrator | 2026-04-01 00:48:01.774850 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-01 00:48:01.774855 | orchestrator | 2026-04-01 00:48:01.774860 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-01 00:48:01.774865 | orchestrator | Wednesday 01 April 2026 00:48:01 +0000 (0:00:04.258) 0:00:07.103 ******* 2026-04-01 00:48:01.774870 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:01.774876 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:48:01.774881 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:48:01.774902 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:48:01.774908 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:48:01.774913 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:48:01.774919 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:48:01.774924 | orchestrator | 2026-04-01 00:48:01.774929 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:01.774935 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774940 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-01 00:48:01.774946 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774951 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774956 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774961 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774967 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-01 00:48:01.774973 | orchestrator | 2026-04-01 00:48:01.774978 | orchestrator | 2026-04-01 00:48:01.774984 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:48:01.774989 | orchestrator | Wednesday 01 April 2026 00:48:01 +0000 (0:00:00.508) 0:00:07.612 ******* 2026-04-01 00:48:01.774994 | orchestrator | =============================================================================== 2026-04-01 00:48:01.775000 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.26s 2026-04-01 00:48:01.775005 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2026-04-01 00:48:01.775019 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-04-01 00:48:01.775024 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-04-01 00:48:13.283510 | orchestrator | 2026-04-01 00:48:13 | INFO  | Prepare task for execution of frr. 2026-04-01 00:48:13.354865 | orchestrator | 2026-04-01 00:48:13 | INFO  | Task 199225ae-87e4-48fb-92c2-cf2dc9e9acbf (frr) was prepared for execution. 
2026-04-01 00:48:13.354919 | orchestrator | 2026-04-01 00:48:13 | INFO  | It takes a moment until task 199225ae-87e4-48fb-92c2-cf2dc9e9acbf (frr) has been started and output is visible here. 2026-04-01 00:48:36.371317 | orchestrator | 2026-04-01 00:48:36.371386 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-01 00:48:36.371393 | orchestrator | 2026-04-01 00:48:36.371400 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-01 00:48:36.371405 | orchestrator | Wednesday 01 April 2026 00:48:16 +0000 (0:00:00.278) 0:00:00.278 ******* 2026-04-01 00:48:36.371411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:48:36.371417 | orchestrator | 2026-04-01 00:48:36.371427 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-01 00:48:36.371433 | orchestrator | Wednesday 01 April 2026 00:48:17 +0000 (0:00:00.198) 0:00:00.477 ******* 2026-04-01 00:48:36.371438 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:36.371445 | orchestrator | 2026-04-01 00:48:36.371462 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-01 00:48:36.371468 | orchestrator | Wednesday 01 April 2026 00:48:18 +0000 (0:00:01.408) 0:00:01.886 ******* 2026-04-01 00:48:36.371485 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:36.371490 | orchestrator | 2026-04-01 00:48:36.371495 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-01 00:48:36.371500 | orchestrator | Wednesday 01 April 2026 00:48:27 +0000 (0:00:08.734) 0:00:10.620 ******* 2026-04-01 00:48:36.371506 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:36.371512 | orchestrator | 2026-04-01 00:48:36.371517 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-01 00:48:36.371523 | orchestrator | Wednesday 01 April 2026 00:48:28 +0000 (0:00:00.933) 0:00:11.553 ******* 2026-04-01 00:48:36.371528 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:36.371533 | orchestrator | 2026-04-01 00:48:36.371538 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-01 00:48:36.371543 | orchestrator | Wednesday 01 April 2026 00:48:29 +0000 (0:00:00.905) 0:00:12.459 ******* 2026-04-01 00:48:36.371548 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:36.371553 | orchestrator | 2026-04-01 00:48:36.371559 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-01 00:48:36.371564 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:01.072) 0:00:13.532 ******* 2026-04-01 00:48:36.371569 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:36.371574 | orchestrator | 2026-04-01 00:48:36.371579 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-01 00:48:36.371584 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:00.146) 0:00:13.678 ******* 2026-04-01 00:48:36.371589 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:36.371594 | orchestrator | 2026-04-01 00:48:36.371600 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-01 00:48:36.371605 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:00.236) 0:00:13.915 ******* 2026-04-01 00:48:36.371610 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:36.371616 | orchestrator | 2026-04-01 00:48:36.371621 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-01 00:48:36.371626 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:00.140) 0:00:14.056 ******* 2026-04-01 
00:48:36.371631 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:36.371637 | orchestrator | 2026-04-01 00:48:36.371642 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-01 00:48:36.371647 | orchestrator | Wednesday 01 April 2026 00:48:30 +0000 (0:00:00.127) 0:00:14.183 ******* 2026-04-01 00:48:36.371653 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:48:36.371658 | orchestrator | 2026-04-01 00:48:36.371663 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-01 00:48:36.371668 | orchestrator | Wednesday 01 April 2026 00:48:31 +0000 (0:00:00.132) 0:00:14.316 ******* 2026-04-01 00:48:36.371673 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:36.371678 | orchestrator | 2026-04-01 00:48:36.371683 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-01 00:48:36.371689 | orchestrator | Wednesday 01 April 2026 00:48:31 +0000 (0:00:00.832) 0:00:15.148 ******* 2026-04-01 00:48:36.371694 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-01 00:48:36.371699 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-01 00:48:36.371705 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-01 00:48:36.371710 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-01 00:48:36.371715 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-01 00:48:36.371721 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-01 00:48:36.371726 | orchestrator | 2026-04-01 00:48:36.371731 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-01 00:48:36.371740 | orchestrator | Wednesday 01 April 2026 00:48:33 +0000 (0:00:01.985) 0:00:17.134 ******* 2026-04-01 00:48:36.371745 | orchestrator | ok: [testbed-manager] 2026-04-01 00:48:36.371750 | orchestrator | 2026-04-01 00:48:36.371755 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-01 00:48:36.371761 | orchestrator | Wednesday 01 April 2026 00:48:34 +0000 (0:00:01.053) 0:00:18.188 ******* 2026-04-01 00:48:36.371766 | orchestrator | changed: [testbed-manager] 2026-04-01 00:48:36.371771 | orchestrator | 2026-04-01 00:48:36.371776 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:48:36.371782 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-01 00:48:36.371787 | orchestrator | 2026-04-01 00:48:36.371792 | orchestrator | 2026-04-01 00:48:36.371806 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:48:36.371812 | orchestrator | Wednesday 01 April 2026 00:48:36 +0000 (0:00:01.263) 0:00:19.452 ******* 2026-04-01 00:48:36.371817 | orchestrator | =============================================================================== 2026-04-01 00:48:36.371822 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.73s 2026-04-01 00:48:36.371827 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.99s 2026-04-01 00:48:36.371832 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.41s 2026-04-01 00:48:36.371836 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.26s 2026-04-01 00:48:36.371852 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.07s 
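The sysctl items applied by the osism.services.frr role earlier in this run (ip_forward, redirect suppression, multipath hashing, rp_filter=2) can be captured as a plain sysctl.d fragment. Below is a minimal Python sketch that renders the exact parameter list copied from this log into that syntax; the `render_sysctl` helper and output path are illustrative only, not part of the role:

```python
# Parameter list copied verbatim from the "Set sysctl parameters" task above.
PARAMS = [
    {"name": "net.ipv4.ip_forward", "value": 1},
    {"name": "net.ipv4.conf.all.send_redirects", "value": 0},
    {"name": "net.ipv4.conf.all.accept_redirects", "value": 0},
    {"name": "net.ipv4.fib_multipath_hash_policy", "value": 1},
    {"name": "net.ipv4.conf.default.ignore_routes_with_linkdown", "value": 1},
    {"name": "net.ipv4.conf.all.rp_filter", "value": 2},
]

def render_sysctl(params):
    """Render the items into sysctl.conf syntax (hypothetical helper,
    not provided by osism.services.frr)."""
    return "\n".join(f"{p['name']} = {p['value']}" for p in params) + "\n"

# A real deployment would write this to e.g. /etc/sysctl.d/90-frr.conf
# and run `sysctl --system`; here we only print the fragment.
print(render_sysctl(PARAMS))
```

rp_filter=2 (loose reverse-path filtering) and fib_multipath_hash_policy=1 are the settings that matter for the ECMP/BGP uplink setup FRR provides on the manager node.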
2026-04-01 00:48:36.371857 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.05s 2026-04-01 00:48:36.371863 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.93s 2026-04-01 00:48:36.371868 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2026-04-01 00:48:36.371873 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.83s 2026-04-01 00:48:36.371878 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.24s 2026-04-01 00:48:36.371883 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-04-01 00:48:36.371889 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-04-01 00:48:36.371894 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-04-01 00:48:36.371899 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-04-01 00:48:36.371905 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-01 00:48:36.492928 | orchestrator | 2026-04-01 00:48:36.495881 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Apr 1 00:48:36 UTC 2026 2026-04-01 00:48:36.495926 | orchestrator | 2026-04-01 00:48:37.517043 | orchestrator | 2026-04-01 00:48:37 | INFO  | Collection nutshell is prepared for execution 2026-04-01 00:48:37.617965 | orchestrator | 2026-04-01 00:48:37 | INFO  | A [0] - dotfiles 2026-04-01 00:48:47.716261 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - homer 2026-04-01 00:48:47.716380 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - netdata 2026-04-01 00:48:47.716393 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - openstackclient 2026-04-01 00:48:47.716401 | orchestrator | 2026-04-01 00:48:47 
| INFO  | A [0] - phpmyadmin 2026-04-01 00:48:47.716407 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - common 2026-04-01 00:48:47.720627 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- loadbalancer 2026-04-01 00:48:47.720890 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [2] --- opensearch 2026-04-01 00:48:47.721426 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [2] --- mariadb-ng 2026-04-01 00:48:47.722061 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [3] ---- horizon 2026-04-01 00:48:47.722285 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [3] ---- keystone 2026-04-01 00:48:47.722832 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- neutron 2026-04-01 00:48:47.723325 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ wait-for-nova 2026-04-01 00:48:47.723764 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [6] ------- octavia 2026-04-01 00:48:47.725481 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- barbican 2026-04-01 00:48:47.725868 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- designate 2026-04-01 00:48:47.726063 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- ironic 2026-04-01 00:48:47.726535 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- placement 2026-04-01 00:48:47.726655 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- magnum 2026-04-01 00:48:47.729780 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- openvswitch 2026-04-01 00:48:47.729834 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [2] --- ovn 2026-04-01 00:48:47.729848 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- memcached 2026-04-01 00:48:47.729858 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- redis 2026-04-01 00:48:47.729868 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- rabbitmq-ng 2026-04-01 00:48:47.729879 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - kubernetes 2026-04-01 00:48:47.732516 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- 
kubeconfig 2026-04-01 00:48:47.732571 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- copy-kubeconfig 2026-04-01 00:48:47.733090 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [0] - ceph 2026-04-01 00:48:47.735152 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [1] -- ceph-pools 2026-04-01 00:48:47.735551 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [2] --- copy-ceph-keys 2026-04-01 00:48:47.735613 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [3] ---- cephclient 2026-04-01 00:48:47.735829 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-01 00:48:47.736088 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- wait-for-keystone 2026-04-01 00:48:47.736500 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-01 00:48:47.736657 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ glance 2026-04-01 00:48:47.736915 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ cinder 2026-04-01 00:48:47.737124 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ nova 2026-04-01 00:48:47.737708 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [4] ----- prometheus 2026-04-01 00:48:47.737993 | orchestrator | 2026-04-01 00:48:47 | INFO  | A [5] ------ grafana 2026-04-01 00:48:47.926828 | orchestrator | 2026-04-01 00:48:47 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-01 00:48:47.927422 | orchestrator | 2026-04-01 00:48:47 | INFO  | Tasks are running in the background 2026-04-01 00:48:49.518746 | orchestrator | 2026-04-01 00:48:49 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-01 00:48:51.705025 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task d6fec851-5f63-4658-b798-1f08e8c72eb9 is in state STARTED 2026-04-01 00:48:51.705584 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:48:51.706758 | orchestrator | 2026-04-01 00:48:51 | INFO 
 | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:48:51.707769 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:48:51.709046 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task 6a54ddc3-81d7-4176-b18e-ce4bea0015d5 is in state STARTED 2026-04-01 00:48:51.709823 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:48:51.710983 | orchestrator | 2026-04-01 00:48:51 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:48:51.711023 | orchestrator | 2026-04-01 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:10.285345 | orchestrator | 2026-04-01 00:49:10.285435 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-01 00:49:10.285442 | orchestrator | 2026-04-01 00:49:10.285447 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-04-01 00:49:10.285452 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.868) 0:00:00.868 ******* 2026-04-01 00:49:10.285457 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:49:10.285462 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:49:10.285466 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:49:10.285470 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:49:10.285474 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:49:10.285478 | orchestrator | changed: [testbed-manager] 2026-04-01 00:49:10.285482 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:49:10.285486 | orchestrator | 2026-04-01 00:49:10.285490 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2026-04-01 00:49:10.285494 | orchestrator | Wednesday 01 April 2026 00:49:01 +0000 (0:00:04.415) 0:00:05.284 ******* 2026-04-01 00:49:10.285499 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-01 00:49:10.285503 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-01 00:49:10.285507 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-01 00:49:10.285511 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-01 00:49:10.285515 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-01 00:49:10.285519 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-01 00:49:10.285540 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-01 00:49:10.285544 | orchestrator | 2026-04-01 00:49:10.285548 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-04-01 00:49:10.285557 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:01.110) 0:00:06.395 ******* 2026-04-01 00:49:10.285564 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.469541', 'end': '2026-04-01 00:49:02.476965', 'delta': '0:00:00.007424', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285570 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.519119', 'end': '2026-04-01 00:49:02.525492', 'delta': '0:00:00.006373', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285574 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.800976', 'end': '2026-04-01 00:49:02.809266', 'delta': '0:00:00.008290', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285595 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.522400', 'end': '2026-04-01 00:49:02.531662', 'delta': '0:00:00.009262', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285602 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.585890', 'end': '2026-04-01 00:49:02.594286', 'delta': '0:00:00.008396', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285614 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.752889', 'end': '2026-04-01 00:49:02.761340', 'delta': '0:00:00.008451', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285618 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-01 00:49:02.793675', 'end': '2026-04-01 00:49:02.801898', 'delta': '0:00:00.008223', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-01 00:49:10.285622 | orchestrator | 2026-04-01 00:49:10.285626 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-04-01 00:49:10.285630 | orchestrator | Wednesday 01 April 2026 00:49:05 +0000 (0:00:02.827) 0:00:09.222 ******* 2026-04-01 00:49:10.285634 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-01 00:49:10.285638 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-01 00:49:10.285642 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-01 00:49:10.285646 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-01 00:49:10.285649 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-01 00:49:10.285653 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-01 00:49:10.285657 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-01 00:49:10.285661 | orchestrator | 2026-04-01 00:49:10.285665 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-04-01 00:49:10.285669 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:01.503) 0:00:10.725 ******* 2026-04-01 00:49:10.285673 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-01 00:49:10.285676 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-01 00:49:10.285680 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-01 00:49:10.285684 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-01 00:49:10.285688 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-01 00:49:10.285692 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-01 00:49:10.285696 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-01 00:49:10.285700 | orchestrator | 2026-04-01 00:49:10.285703 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:49:10.285715 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285721 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285725 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285729 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285733 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285737 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285741 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:49:10.285745 | orchestrator | 2026-04-01 00:49:10.285749 | orchestrator | 2026-04-01 00:49:10.285753 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:49:10.285757 | orchestrator | Wednesday 01 April 2026 00:49:09 +0000 (0:00:02.017) 0:00:12.743 ******* 2026-04-01 00:49:10.285761 | orchestrator | =============================================================================== 2026-04-01 00:49:10.285765 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.42s 2026-04-01 00:49:10.285769 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.83s 2026-04-01 00:49:10.285773 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.02s 2026-04-01 00:49:10.285777 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.50s 2026-04-01 00:49:10.285781 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.11s 2026-04-01 00:49:10.285785 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task d6fec851-5f63-4658-b798-1f08e8c72eb9 is in state SUCCESS 2026-04-01 00:49:10.285789 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:10.285793 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:10.285797 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:10.285801 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task 6a54ddc3-81d7-4176-b18e-ce4bea0015d5 is in state STARTED 2026-04-01 00:49:10.285805 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:49:10.285808 | orchestrator | 2026-04-01 00:49:10 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:10.285977 | orchestrator | 2026-04-01 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:13.533640 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:13.533742 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:13.533752 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:13.533759 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:13.534389 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task 6a54ddc3-81d7-4176-b18e-ce4bea0015d5 is in state STARTED 2026-04-01 00:49:13.534441 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:49:13.534448 | orchestrator | 2026-04-01 00:49:13 | INFO  | Task 
3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:13.534455 | orchestrator | 2026-04-01 00:49:13 | INFO  | Wait 1 second(s) until the next check
ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:38.340738 | orchestrator | 2026-04-01 00:49:38 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:38.340755 | orchestrator | 2026-04-01 00:49:38 | INFO  | Task 6a54ddc3-81d7-4176-b18e-ce4bea0015d5 is in state STARTED 2026-04-01 00:49:38.340772 | orchestrator | 2026-04-01 00:49:38 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:49:38.340788 | orchestrator | 2026-04-01 00:49:38 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:38.340805 | orchestrator | 2026-04-01 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:41.210678 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:41.210806 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:41.212232 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:41.216143 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:41.216222 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task 6a54ddc3-81d7-4176-b18e-ce4bea0015d5 is in state SUCCESS 2026-04-01 00:49:41.220488 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:49:41.220550 | orchestrator | 2026-04-01 00:49:41 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:41.220564 | orchestrator | 2026-04-01 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:44.252048 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:44.253004 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task 
b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:44.259465 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:44.259554 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:44.260363 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state STARTED 2026-04-01 00:49:44.260415 | orchestrator | 2026-04-01 00:49:44 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:44.260424 | orchestrator | 2026-04-01 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:47.296670 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:47.297486 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:47.298271 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:47.298694 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:47.299003 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task 5e613a97-4ef4-4fb5-8151-42767d6c7116 is in state SUCCESS 2026-04-01 00:49:47.301264 | orchestrator | 2026-04-01 00:49:47 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:47.301315 | orchestrator | 2026-04-01 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:50.339350 | orchestrator | 2026-04-01 00:49:50 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:50.340675 | orchestrator | 2026-04-01 00:49:50 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:50.341675 | orchestrator | 2026-04-01 00:49:50 | INFO  | Task 
ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:50.342315 | orchestrator | 2026-04-01 00:49:50 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:50.343873 | orchestrator | 2026-04-01 00:49:50 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:50.343909 | orchestrator | 2026-04-01 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:53.386259 | orchestrator | 2026-04-01 00:49:53 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:53.387579 | orchestrator | 2026-04-01 00:49:53 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:53.389913 | orchestrator | 2026-04-01 00:49:53 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:53.392071 | orchestrator | 2026-04-01 00:49:53 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:53.392745 | orchestrator | 2026-04-01 00:49:53 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:53.392774 | orchestrator | 2026-04-01 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:49:56.445991 | orchestrator | 2026-04-01 00:49:56 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:56.449042 | orchestrator | 2026-04-01 00:49:56 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:56.449407 | orchestrator | 2026-04-01 00:49:56 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:56.450281 | orchestrator | 2026-04-01 00:49:56 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:56.451027 | orchestrator | 2026-04-01 00:49:56 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:56.451067 | orchestrator | 2026-04-01 00:49:56 | INFO  | Wait 1 
second(s) until the next check 2026-04-01 00:49:59.483860 | orchestrator | 2026-04-01 00:49:59 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:49:59.484425 | orchestrator | 2026-04-01 00:49:59 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:49:59.484926 | orchestrator | 2026-04-01 00:49:59 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:49:59.488145 | orchestrator | 2026-04-01 00:49:59 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:49:59.489258 | orchestrator | 2026-04-01 00:49:59 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:49:59.489280 | orchestrator | 2026-04-01 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:02.731690 | orchestrator | 2026-04-01 00:50:02 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:02.733123 | orchestrator | 2026-04-01 00:50:02 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:02.734117 | orchestrator | 2026-04-01 00:50:02 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:02.735014 | orchestrator | 2026-04-01 00:50:02 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:02.736517 | orchestrator | 2026-04-01 00:50:02 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state STARTED 2026-04-01 00:50:02.736662 | orchestrator | 2026-04-01 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:05.774603 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:05.775188 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:05.775773 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task 
c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:05.776478 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:05.777025 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:05.786241 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:05.790826 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:05.797645 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:05.808561 | orchestrator | 2026-04-01 00:50:05 | INFO  | Task 3a2f16f6-b0cb-4f9b-96d2-57df48b698d1 is in state SUCCESS 2026-04-01 00:50:05.810057 | orchestrator | 2026-04-01 00:50:05.810419 | orchestrator | 2026-04-01 00:50:05.810428 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-01 00:50:05.810433 | orchestrator | 2026-04-01 00:50:05.810438 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-01 00:50:05.810442 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.881) 0:00:00.881 ******* 2026-04-01 00:50:05.810446 | orchestrator | ok: [testbed-manager] => { 2026-04-01 00:50:05.810451 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-04-01 00:50:05.810456 | orchestrator | } 2026-04-01 00:50:05.810460 | orchestrator | 2026-04-01 00:50:05.810464 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-01 00:50:05.810468 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.341) 0:00:01.223 ******* 2026-04-01 00:50:05.810473 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:05.810477 | orchestrator | 2026-04-01 00:50:05.810481 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-01 00:50:05.810485 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:02.594) 0:00:03.817 ******* 2026-04-01 00:50:05.810489 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-01 00:50:05.810493 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-01 00:50:05.810497 | orchestrator | 2026-04-01 00:50:05.810501 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-01 00:50:05.810505 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:02.607) 0:00:06.425 ******* 2026-04-01 00:50:05.810509 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.810522 | orchestrator | 2026-04-01 00:50:05.810526 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-01 00:50:05.810530 | orchestrator | Wednesday 01 April 2026 00:49:05 +0000 (0:00:02.566) 0:00:08.991 ******* 2026-04-01 00:50:05.810534 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.810537 | orchestrator | 2026-04-01 00:50:05.810541 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-01 00:50:05.810702 | orchestrator | Wednesday 01 April 2026 00:49:06 +0000 (0:00:00.923) 0:00:09.914 ******* 2026-04-01 00:50:05.810711 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-04-01 00:50:05.810717 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:05.810724 | orchestrator | 2026-04-01 00:50:05.810729 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-01 00:50:05.810735 | orchestrator | Wednesday 01 April 2026 00:49:33 +0000 (0:00:27.294) 0:00:37.209 ******* 2026-04-01 00:50:05.810862 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.810870 | orchestrator | 2026-04-01 00:50:05.810874 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:05.810878 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:50:05.810883 | orchestrator | 2026-04-01 00:50:05.810886 | orchestrator | 2026-04-01 00:50:05.810890 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:05.810894 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:03.282) 0:00:40.491 ******* 2026-04-01 00:50:05.810898 | orchestrator | =============================================================================== 2026-04-01 00:50:05.810902 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.29s 2026-04-01 00:50:05.810910 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.28s 2026-04-01 00:50:05.810914 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.61s 2026-04-01 00:50:05.810918 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.59s 2026-04-01 00:50:05.810922 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.57s 2026-04-01 00:50:05.810925 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 0.92s 2026-04-01 00:50:05.810929 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.34s 2026-04-01 00:50:05.810933 | orchestrator | 2026-04-01 00:50:05.810937 | orchestrator | 2026-04-01 00:50:05.810940 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-01 00:50:05.810944 | orchestrator | 2026-04-01 00:50:05.810948 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-01 00:50:05.810952 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.373) 0:00:00.373 ******* 2026-04-01 00:50:05.810956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-01 00:50:05.810961 | orchestrator | 2026-04-01 00:50:05.810965 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-01 00:50:05.810968 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.455) 0:00:00.828 ******* 2026-04-01 00:50:05.810972 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-01 00:50:05.810976 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-01 00:50:05.810980 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-01 00:50:05.810984 | orchestrator | 2026-04-01 00:50:05.810988 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-01 00:50:05.810992 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:02.265) 0:00:03.094 ******* 2026-04-01 00:50:05.810995 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.810999 | orchestrator | 2026-04-01 00:50:05.811003 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-01 00:50:05.811012 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:03.101) 
0:00:06.195 ******* 2026-04-01 00:50:05.811035 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-01 00:50:05.811040 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:05.811044 | orchestrator | 2026-04-01 00:50:05.811048 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-01 00:50:05.811052 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:34.135) 0:00:40.330 ******* 2026-04-01 00:50:05.811056 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.811059 | orchestrator | 2026-04-01 00:50:05.811063 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-01 00:50:05.811067 | orchestrator | Wednesday 01 April 2026 00:49:39 +0000 (0:00:02.202) 0:00:42.533 ******* 2026-04-01 00:50:05.811071 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:05.811075 | orchestrator | 2026-04-01 00:50:05.811078 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-01 00:50:05.811082 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.810) 0:00:43.343 ******* 2026-04-01 00:50:05.811087 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.811094 | orchestrator | 2026-04-01 00:50:05.811103 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-01 00:50:05.811110 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:01.699) 0:00:45.043 ******* 2026-04-01 00:50:05.811116 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.811122 | orchestrator | 2026-04-01 00:50:05.811129 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-01 00:50:05.811134 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.629) 0:00:45.673 ******* 2026-04-01 00:50:05.811140 | orchestrator | changed: 
[testbed-manager] 2026-04-01 00:50:05.811146 | orchestrator | 2026-04-01 00:50:05.811152 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-01 00:50:05.811159 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:01.036) 0:00:46.709 ******* 2026-04-01 00:50:05.811165 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:05.811172 | orchestrator | 2026-04-01 00:50:05.811179 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:05.811186 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:50:05.811193 | orchestrator | 2026-04-01 00:50:05.811200 | orchestrator | 2026-04-01 00:50:05.811206 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:05.811212 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.444) 0:00:47.154 ******* 2026-04-01 00:50:05.811216 | orchestrator | =============================================================================== 2026-04-01 00:50:05.811219 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.14s 2026-04-01 00:50:05.811223 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.10s 2026-04-01 00:50:05.811227 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.26s 2026-04-01 00:50:05.811231 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.20s 2026-04-01 00:50:05.811235 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.70s 2026-04-01 00:50:05.811238 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.04s 2026-04-01 00:50:05.811242 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.81s 
2026-04-01 00:50:05.811246 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.63s 2026-04-01 00:50:05.811250 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.46s 2026-04-01 00:50:05.811254 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s 2026-04-01 00:50:05.811262 | orchestrator | 2026-04-01 00:50:05.811266 | orchestrator | 2026-04-01 00:50:05.811270 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-01 00:50:05.811273 | orchestrator | 2026-04-01 00:50:05.811278 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-01 00:50:05.811281 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.366) 0:00:00.366 ******* 2026-04-01 00:50:05.811285 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:50:05.811289 | orchestrator | 2026-04-01 00:50:05.811293 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-01 00:50:05.811297 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:01.085) 0:00:01.452 ******* 2026-04-01 00:50:05.811301 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811343 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811348 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811352 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811355 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811359 | orchestrator | changed: 
[testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811378 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811382 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811386 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811390 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-01 00:50:05.811394 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811418 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811423 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811427 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811431 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811435 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811439 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811443 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811446 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-01 00:50:05.811450 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811454 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-01 00:50:05.811458 | orchestrator 
| 2026-04-01 00:50:05.811462 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-01 00:50:05.811466 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:03.823) 0:00:05.275 ******* 2026-04-01 00:50:05.811470 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:50:05.811474 | orchestrator | 2026-04-01 00:50:05.811478 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-01 00:50:05.811482 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:01.363) 0:00:06.638 ******* 2026-04-01 00:50:05.811487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811507 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.811545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811568 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811639 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.811644 | orchestrator | 2026-04-01 00:50:05.811649 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-01 00:50:05.811653 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:05.605) 0:00:12.244 ******* 2026-04-01 00:50:05.811658 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811670 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811730 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.811736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-04-01 00:50:05.811746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811751 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.811766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811780 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.811785 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.811789 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.811794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811808 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.811813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811827 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.811832 | orchestrator | 2026-04-01 00:50:05.811837 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-01 00:50:05.811841 | orchestrator | Wednesday 01 April 2026 00:49:06 +0000 (0:00:03.727) 0:00:15.971 ******* 2026-04-01 00:50:05.811859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811913 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.811917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811925 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.811929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811933 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.811939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811947 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.811951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811976 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.811980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-01 00:50:05.811984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.811988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.811992 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.811998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812008 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.812012 | orchestrator | 2026-04-01 00:50:05.812016 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-01 00:50:05.812020 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:05.992) 0:00:21.964 ******* 2026-04-01 00:50:05.812024 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.812028 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.812031 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.812035 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.812039 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.812043 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.812047 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.812051 | orchestrator | 2026-04-01 00:50:05.812055 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-01 00:50:05.812058 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:01.958) 0:00:23.923 ******* 2026-04-01 00:50:05.812062 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.812066 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.812070 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.812074 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.812078 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.812082 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.812095 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.812100 | orchestrator | 2026-04-01 
00:50:05.812104 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-01 00:50:05.812108 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:01.386) 0:00:25.309 ******* 2026-04-01 00:50:05.812111 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.812115 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.812119 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.812123 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.812127 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.812130 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.812134 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.812138 | orchestrator | 2026-04-01 00:50:05.812142 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-01 00:50:05.812146 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:01.327) 0:00:26.636 ******* 2026-04-01 00:50:05.812150 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.812154 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:05.812157 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:05.812161 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:05.812165 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:05.812169 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:05.812172 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:05.812176 | orchestrator | 2026-04-01 00:50:05.812180 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-01 00:50:05.812184 | orchestrator | Wednesday 01 April 2026 00:49:19 +0000 (0:00:02.242) 0:00:28.879 ******* 2026-04-01 00:50:05.812188 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812208 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-04-01 00:50:05.812233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812253 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812292 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812319 | orchestrator | 2026-04-01 00:50:05.812325 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-01 00:50:05.812329 
| orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:04.868) 0:00:33.747 ******* 2026-04-01 00:50:05.812333 | orchestrator | [WARNING]: Skipped 2026-04-01 00:50:05.812337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-01 00:50:05.812341 | orchestrator | to this access issue: 2026-04-01 00:50:05.812345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-01 00:50:05.812349 | orchestrator | directory 2026-04-01 00:50:05.812353 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:50:05.812357 | orchestrator | 2026-04-01 00:50:05.812361 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-01 00:50:05.812365 | orchestrator | Wednesday 01 April 2026 00:49:25 +0000 (0:00:01.096) 0:00:34.843 ******* 2026-04-01 00:50:05.812368 | orchestrator | [WARNING]: Skipped 2026-04-01 00:50:05.812372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-01 00:50:05.812376 | orchestrator | to this access issue: 2026-04-01 00:50:05.812380 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-01 00:50:05.812384 | orchestrator | directory 2026-04-01 00:50:05.812391 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:50:05.812395 | orchestrator | 2026-04-01 00:50:05.812399 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-01 00:50:05.812403 | orchestrator | Wednesday 01 April 2026 00:49:27 +0000 (0:00:01.322) 0:00:36.166 ******* 2026-04-01 00:50:05.812406 | orchestrator | [WARNING]: Skipped 2026-04-01 00:50:05.812410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-01 00:50:05.812414 | orchestrator | to this access issue: 2026-04-01 00:50:05.812418 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-01 00:50:05.812422 | orchestrator | directory 2026-04-01 00:50:05.812426 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:50:05.812429 | orchestrator | 2026-04-01 00:50:05.812433 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-01 00:50:05.812437 | orchestrator | Wednesday 01 April 2026 00:49:28 +0000 (0:00:01.433) 0:00:37.600 ******* 2026-04-01 00:50:05.812441 | orchestrator | [WARNING]: Skipped 2026-04-01 00:50:05.812445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-01 00:50:05.812448 | orchestrator | to this access issue: 2026-04-01 00:50:05.812452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-01 00:50:05.812456 | orchestrator | directory 2026-04-01 00:50:05.812460 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 00:50:05.812464 | orchestrator | 2026-04-01 00:50:05.812468 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-01 00:50:05.812471 | orchestrator | Wednesday 01 April 2026 00:49:30 +0000 (0:00:01.500) 0:00:39.101 ******* 2026-04-01 00:50:05.812475 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:05.812479 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:05.812483 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:05.812487 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:05.812491 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:05.812494 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:05.812498 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.812502 | orchestrator | 2026-04-01 00:50:05.812506 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-01 00:50:05.812511 | orchestrator | 
Wednesday 01 April 2026 00:49:36 +0000 (0:00:06.113) 0:00:45.215 ******* 2026-04-01 00:50:05.812515 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812527 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812531 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812534 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812538 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-01 00:50:05.812542 | orchestrator | 2026-04-01 00:50:05.812546 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-01 00:50:05.812550 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:04.360) 0:00:49.575 ******* 2026-04-01 00:50:05.812553 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.812557 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:05.812561 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:05.812565 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:05.812569 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:05.812573 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:50:05.812579 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:05.812582 | orchestrator | 2026-04-01 00:50:05.812586 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2026-04-01 00:50:05.812590 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:02.052) 0:00:51.627 ******* 2026-04-01 00:50:05.812598 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812602 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812606 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812610 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812616 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812624 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812633 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812642 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 
00:50:05.812646 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812650 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812666 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812680 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.812685 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812689 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812694 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812698 | orchestrator | 2026-04-01 00:50:05.812702 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-01 00:50:05.812706 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:02.364) 0:00:53.992 ******* 2026-04-01 00:50:05.812712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812716 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812720 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812728 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812731 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812735 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-01 00:50:05.812739 | orchestrator | 2026-04-01 00:50:05.812743 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-01 00:50:05.812747 | orchestrator | Wednesday 01 April 2026 00:49:47 +0000 (0:00:02.223) 0:00:56.215 ******* 2026-04-01 00:50:05.812751 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812762 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812768 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812772 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812776 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-01 00:50:05.812780 | orchestrator | 2026-04-01 00:50:05.812783 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-01 00:50:05.812787 | orchestrator | Wednesday 01 April 2026 00:49:49 +0000 (0:00:02.822) 0:00:59.038 ******* 2026-04-01 00:50:05.812791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812795 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812816 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812838 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-01 00:50:05.812850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:05.812892 | orchestrator | 2026-04-01 00:50:05.812896 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-01 00:50:05.812900 | orchestrator | Wednesday 01 April 2026 00:49:53 +0000 (0:00:03.594) 0:01:02.633 ******* 2026-04-01 00:50:05.812904 | orchestrator | changed: [testbed-manager] => { 2026-04-01 00:50:05.812908 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812912 | orchestrator | } 2026-04-01 00:50:05.812916 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:50:05.812920 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812923 | orchestrator | } 2026-04-01 00:50:05.812927 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:50:05.812931 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812935 | orchestrator | } 2026-04-01 00:50:05.812939 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:50:05.812942 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812946 | orchestrator | } 2026-04-01 00:50:05.812950 | orchestrator | changed: [testbed-node-3] => { 2026-04-01 00:50:05.812954 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 
00:50:05.812957 | orchestrator | } 2026-04-01 00:50:05.812961 | orchestrator | changed: [testbed-node-4] => { 2026-04-01 00:50:05.812967 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812971 | orchestrator | } 2026-04-01 00:50:05.812975 | orchestrator | changed: [testbed-node-5] => { 2026-04-01 00:50:05.812978 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:05.812982 | orchestrator | } 2026-04-01 00:50:05.812986 | orchestrator | 2026-04-01 00:50:05.812990 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:50:05.812994 | orchestrator | Wednesday 01 April 2026 00:49:54 +0000 (0:00:00.561) 0:01:03.194 ******* 2026-04-01 00:50:05.812998 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813010 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813058 | orchestrator | skipping: [testbed-manager] 2026-04-01 00:50:05.813062 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:05.813066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813082 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:05.813088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-01 00:50:05.813092 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:05.813096 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:05.813100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813108 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:05.813112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-04-01 00:50:05.813118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:05.813126 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:05.813130 | orchestrator | 2026-04-01 00:50:05.813133 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-01 00:50:05.813137 | orchestrator | Wednesday 01 April 2026 00:49:55 +0000 (0:00:01.608) 0:01:04.803 ******* 2026-04-01 00:50:05.813141 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:05.813145 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:50:05.813151 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:50:05.813155 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:50:05.813158 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:50:05.813162 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:50:05.813166 | orchestrator | changed: 
[testbed-node-5]
2026-04-01 00:50:05.813170 | orchestrator |
2026-04-01 00:50:05.813175 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-01 00:50:05.813179 | orchestrator | Wednesday 01 April 2026 00:49:57 +0000 (0:00:01.597) 0:01:06.401 *******
2026-04-01 00:50:05.813183 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:05.813187 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:05.813191 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:05.813195 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:05.813199 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:05.813203 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:05.813206 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:05.813210 | orchestrator |
2026-04-01 00:50:05.813214 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813218 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:01.128) 0:01:07.530 *******
2026-04-01 00:50:05.813222 | orchestrator |
2026-04-01 00:50:05.813226 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813229 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.081) 0:01:07.611 *******
2026-04-01 00:50:05.813233 | orchestrator |
2026-04-01 00:50:05.813237 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813241 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.058) 0:01:07.670 *******
2026-04-01 00:50:05.813245 | orchestrator |
2026-04-01 00:50:05.813249 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813252 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.058) 0:01:07.729 *******
2026-04-01 00:50:05.813256 | orchestrator |
2026-04-01 00:50:05.813260 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813264 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.057) 0:01:07.787 *******
2026-04-01 00:50:05.813268 | orchestrator |
2026-04-01 00:50:05.813272 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813275 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.057) 0:01:07.844 *******
2026-04-01 00:50:05.813279 | orchestrator |
2026-04-01 00:50:05.813283 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-01 00:50:05.813287 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.057) 0:01:07.902 *******
2026-04-01 00:50:05.813291 | orchestrator |
2026-04-01 00:50:05.813295 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-01 00:50:05.813298 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.080) 0:01:07.982 *******
2026-04-01 00:50:05.813316 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_vjq_jlq4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_vjq_jlq4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_vjq_jlq4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_vjq_jlq4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813326 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5snl9e_o/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5snl9e_o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_5snl9e_o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5snl9e_o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813335 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_sbcg678z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_sbcg678z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_sbcg678z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_sbcg678z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813344 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_bsgijkxv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_bsgijkxv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_bsgijkxv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_bsgijkxv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813354 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xbjf3r73/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xbjf3r73/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_xbjf3r73/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xbjf3r73/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813361 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ugftuvkv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ugftuvkv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ugftuvkv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ugftuvkv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:05.813371 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_a1iuozzk/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_a1iuozzk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_a1iuozzk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_a1iuozzk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"}
2026-04-01 00:50:05.813376 | orchestrator |
2026-04-01 00:50:05.813380 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:50:05.813387 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813393 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813397 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813401 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813405 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813409 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813413 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-01 00:50:05.813417 | orchestrator |
2026-04-01 00:50:05.813421 | orchestrator |
2026-04-01 00:50:05.813425 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:05.813429 | orchestrator | Wednesday 01 April 2026 00:50:02 +0000 (0:00:03.861) 0:01:11.843 *******
2026-04-01 00:50:05.813433 | orchestrator | ===============================================================================
2026-04-01 00:50:05.813437 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.11s
2026-04-01 00:50:05.813441 |
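Every host above fails identically: the handler tries to pull `registry.osism.tech/kolla/release//fluentd`, and the doubled slash leaves an empty path component that the Docker daemon rejects as "invalid reference format". A minimal sketch reproducing the check (assuming a simplified approximation of the distribution/reference path-component grammar, not Docker's actual implementation):

```python
import re

# Simplified approximation of Docker's path-component grammar: each
# component between slashes must be non-empty lowercase alphanumerics,
# optionally joined by separators (., _, __, or runs of -).
PATH_COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*$")

def is_valid_repository(name: str) -> bool:
    """Check every path component after the registry host."""
    host, _, remainder = name.partition("/")
    if not remainder:
        return False
    # A doubled slash yields an empty component, which cannot match.
    return all(PATH_COMPONENT.match(c) for c in remainder.split("/"))

print(is_valid_repository("registry.osism.tech/kolla/release/fluentd"))   # True
print(is_valid_repository("registry.osism.tech/kolla/release//fluentd"))  # False
```

The empty component between `release` and `fluentd` is the whole story; the tag `5.0.9.20260328` itself is well-formed.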
orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.99s
2026-04-01 00:50:05.813445 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.61s
2026-04-01 00:50:05.813450 | orchestrator | common : Copying over config.json files for services -------------------- 4.87s
2026-04-01 00:50:05.813454 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.36s
2026-04-01 00:50:05.813458 | orchestrator | common : Restart fluentd container -------------------------------------- 3.86s
2026-04-01 00:50:05.813462 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.82s
2026-04-01 00:50:05.813466 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.73s
2026-04-01 00:50:05.813470 | orchestrator | service-check-containers : common | Check containers -------------------- 3.59s
2026-04-01 00:50:05.813474 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.82s
2026-04-01 00:50:05.813478 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.36s
2026-04-01 00:50:05.813481 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.24s
2026-04-01 00:50:05.813485 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.22s
2026-04-01 00:50:05.813489 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.05s
2026-04-01 00:50:05.813493 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.96s
2026-04-01 00:50:05.813497 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.61s
2026-04-01 00:50:05.813501 | orchestrator | common : Creating log volume -------------------------------------------- 1.60s
2026-04-01 00:50:05.813505 |
orchestrator | common : Find custom fluentd output config files ------------------------ 1.50s 2026-04-01 00:50:05.813508 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.43s 2026-04-01 00:50:05.813512 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.39s 2026-04-01 00:50:05.813516 | orchestrator | 2026-04-01 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:08.858953 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:08.860562 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:08.860982 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:08.863515 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:08.863935 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:08.864678 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:08.865257 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:08.865905 | orchestrator | 2026-04-01 00:50:08 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:08.865938 | orchestrator | 2026-04-01 00:50:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:11.919036 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:11.919688 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:11.921507 | orchestrator | 2026-04-01 00:50:11 | INFO  
| Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:11.922166 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:11.922953 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:11.923488 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:11.924138 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:11.924956 | orchestrator | 2026-04-01 00:50:11 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:11.924993 | orchestrator | 2026-04-01 00:50:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:14.961984 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:14.965434 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:14.967816 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:14.970422 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:14.971616 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:14.971649 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:14.972284 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:14.973780 | orchestrator | 2026-04-01 00:50:14 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:14.973804 | orchestrator | 2026-04-01 
00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:18.007648 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:18.009895 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:18.010186 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state STARTED 2026-04-01 00:50:18.010702 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:18.014516 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:18.016106 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:18.021867 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:18.022859 | orchestrator | 2026-04-01 00:50:18 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:18.022902 | orchestrator | 2026-04-01 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:21.045933 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:21.046104 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state STARTED 2026-04-01 00:50:21.048611 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task c9545ab1-f2c2-4292-a0b4-933b369e1102 is in state SUCCESS 2026-04-01 00:50:21.051768 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:21.057505 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:21.061437 | orchestrator | 2026-04-01 
00:50:21 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state STARTED 2026-04-01 00:50:21.064202 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:21.064616 | orchestrator | 2026-04-01 00:50:21 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:21.064651 | orchestrator | 2026-04-01 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:24.090686 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state STARTED 2026-04-01 00:50:24.092082 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task e03a4800-0d4f-4080-aa4e-5a48f5de93a3 is in state SUCCESS 2026-04-01 00:50:24.093089 | orchestrator | 2026-04-01 00:50:24.093140 | orchestrator | 2026-04-01 00:50:24.093149 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-01 00:50:24.093158 | orchestrator | 2026-04-01 00:50:24.093165 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-01 00:50:24.093172 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:00.551) 0:00:00.551 ******* 2026-04-01 00:50:24.093221 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:24.093229 | orchestrator | 2026-04-01 00:50:24.093235 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-01 00:50:24.093242 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:01.830) 0:00:02.381 ******* 2026-04-01 00:50:24.093248 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-01 00:50:24.093255 | orchestrator | 2026-04-01 00:50:24.093262 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-01 00:50:24.093268 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:00.630) 0:00:03.012 ******* 2026-04-01 00:50:24.093318 | 
orchestrator | changed: [testbed-manager] 2026-04-01 00:50:24.093325 | orchestrator | 2026-04-01 00:50:24.093332 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-01 00:50:24.093338 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:01.163) 0:00:04.175 ******* 2026-04-01 00:50:24.093361 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-01 00:50:24.093368 | orchestrator | ok: [testbed-manager] 2026-04-01 00:50:24.093374 | orchestrator | 2026-04-01 00:50:24.093380 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-01 00:50:24.093387 | orchestrator | Wednesday 01 April 2026 00:50:14 +0000 (0:00:56.744) 0:01:00.920 ******* 2026-04-01 00:50:24.093393 | orchestrator | changed: [testbed-manager] 2026-04-01 00:50:24.093399 | orchestrator | 2026-04-01 00:50:24.093406 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:24.093415 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:50:24.093422 | orchestrator | 2026-04-01 00:50:24.093429 | orchestrator | 2026-04-01 00:50:24.093435 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:24.093442 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:03.367) 0:01:04.287 ******* 2026-04-01 00:50:24.093448 | orchestrator | =============================================================================== 2026-04-01 00:50:24.093454 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.74s 2026-04-01 00:50:24.093461 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.37s 2026-04-01 00:50:24.093467 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 
1.83s
2026-04-01 00:50:24.093473 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.16s
2026-04-01 00:50:24.093480 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s
2026-04-01 00:50:24.093486 | orchestrator |
2026-04-01 00:50:24.093493 | orchestrator |
2026-04-01 00:50:24.093499 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:50:24.093505 | orchestrator |
2026-04-01 00:50:24.093514 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:50:24.093520 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:00.666) 0:00:00.666 *******
2026-04-01 00:50:24.093527 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.093533 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.093540 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.093546 | orchestrator |
2026-04-01 00:50:24.093552 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:50:24.093559 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:00.476) 0:00:01.142 *******
2026-04-01 00:50:24.093565 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-01 00:50:24.093572 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-01 00:50:24.093578 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-01 00:50:24.093585 | orchestrator |
2026-04-01 00:50:24.093591 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-01 00:50:24.093598 | orchestrator |
2026-04-01 00:50:24.093604 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-01 00:50:24.093611 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:00.689) 0:00:01.832 *******
2026-04-01 00:50:24.093617 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:50:24.093624 | orchestrator |
2026-04-01 00:50:24.093630 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-01 00:50:24.093636 | orchestrator | Wednesday 01 April 2026 00:50:13 +0000 (0:00:00.718) 0:00:02.551 *******
2026-04-01 00:50:24.093642 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-01 00:50:24.093648 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-01 00:50:24.093655 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-01 00:50:24.093661 | orchestrator |
2026-04-01 00:50:24.093668 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-01 00:50:24.093681 | orchestrator | Wednesday 01 April 2026 00:50:15 +0000 (0:00:02.093) 0:00:04.644 *******
2026-04-01 00:50:24.093693 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-01 00:50:24.093699 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-01 00:50:24.093705 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-01 00:50:24.093712 | orchestrator |
2026-04-01 00:50:24.093717 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-04-01 00:50:24.093723 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:01.796) 0:00:06.440 *******
2026-04-01 00:50:24.093743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093766 | orchestrator |
2026-04-01 00:50:24.093773 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-04-01 00:50:24.093779 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:01.675) 0:00:08.116 *******
2026-04-01 00:50:24.093786 | orchestrator | changed: [testbed-node-0] => {
2026-04-01 00:50:24.093792 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 00:50:24.093799 | orchestrator | }
2026-04-01 00:50:24.093805 | orchestrator | changed: [testbed-node-1] => {
2026-04-01 00:50:24.093812 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 00:50:24.093818 | orchestrator | }
2026-04-01 00:50:24.093825 | orchestrator | changed: [testbed-node-2] => {
2026-04-01 00:50:24.093831 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 00:50:24.093837 | orchestrator | }
2026-04-01 00:50:24.093843 | orchestrator |
2026-04-01 00:50:24.093850 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-01 00:50:24.093857 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:00.325) 0:00:08.441 *******
2026-04-01 00:50:24.093864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093876 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:50:24.093888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093896 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:50:24.093907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-01 00:50:24.093914 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:50:24.093921 | orchestrator |
2026-04-01 00:50:24.093928 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-01 00:50:24.093935 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:01.402) 0:00:09.843 *******
2026-04-01 00:50:24.093947 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_i_8ieg9m/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_i_8ieg9m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_i_8ieg9m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_i_8ieg9m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"}
2026-04-01 00:50:24.093965 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qxx464_s/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qxx464_s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_qxx464_s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qxx464_s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"}
2026-04-01 00:50:24.093981 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gqob8a00/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gqob8a00/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_gqob8a00/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_gqob8a00/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"}
2026-04-01 00:50:24.093993 | orchestrator |
2026-04-01 00:50:24.094000 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:50:24.094011 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:24.094053 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:24.094061 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:24.094067 | orchestrator |
2026-04-01 00:50:24.094074 | orchestrator |
2026-04-01 00:50:24.094082 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:24.094089 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:01.590) 0:00:11.434 *******
2026-04-01 00:50:24.094096 | orchestrator | ===============================================================================
2026-04-01 00:50:24.094103 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.09s
2026-04-01 00:50:24.094110 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s
2026-04-01 00:50:24.094117 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.67s
2026-04-01 00:50:24.094124 | orchestrator | memcached : Restart memcached container --------------------------------- 1.59s
2026-04-01 00:50:24.094132 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.40s
2026-04-01 00:50:24.094139 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.72s
2026-04-01 00:50:24.094146 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2026-04-01 00:50:24.094153 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s
2026-04-01 00:50:24.094161 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.33s
2026-04-01 00:50:24.094177 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:24.095862 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:24.097073 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task ac0beb29-009e-4439-9e26-9484b1d8a9a9 is in state SUCCESS
2026-04-01 00:50:24.097543 | orchestrator |
2026-04-01 00:50:24.097561 | orchestrator |
2026-04-01 00:50:24.097568 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:50:24.097575 | orchestrator |
2026-04-01 00:50:24.097582 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:50:24.097588 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.690) 0:00:00.690 *******
2026-04-01 00:50:24.097595 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-01 00:50:24.097602 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-01 00:50:24.097608 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-01 00:50:24.097615 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-01 00:50:24.097621 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-01 00:50:24.097627 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-01 00:50:24.097639 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-01 00:50:24.097646 | orchestrator |
2026-04-01 00:50:24.097652 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-01 00:50:24.097658 | orchestrator |
2026-04-01 00:50:24.097665 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-01 00:50:24.097672 | orchestrator | Wednesday 01 April 2026 00:48:59 +0000 (0:00:01.801) 0:00:02.492 *******
2026-04-01 00:50:24.097679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:50:24.097687 | orchestrator |
2026-04-01 00:50:24.097693 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-01 00:50:24.097706 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:01.253) 0:00:03.745 *******
2026-04-01 00:50:24.097713 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.097720 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.097727 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.097733 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:24.097740 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:24.097746 | orchestrator | ok: [testbed-manager]
2026-04-01 00:50:24.097752 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:24.097758 | orchestrator |
2026-04-01 00:50:24.097765 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-01 00:50:24.097771 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:02.891) 0:00:06.637 *******
2026-04-01 00:50:24.097778 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.097784 | orchestrator | ok: [testbed-manager]
2026-04-01 00:50:24.097790 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.097796 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:24.097803 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.097809 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:24.097816 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:24.097822 | orchestrator |
2026-04-01 00:50:24.097829 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-01 00:50:24.097835 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:04.688) 0:00:11.325 *******
2026-04-01 00:50:24.097842 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:24.097848 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:24.097855 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:24.097861 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:24.097876 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:24.097883 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:24.097889 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.097896 | orchestrator |
2026-04-01 00:50:24.097902 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-01 00:50:24.097908 | orchestrator | Wednesday 01 April 2026 00:49:09 +0000 (0:00:01.710) 0:00:13.036 *******
2026-04-01 00:50:24.097920 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:24.097926 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.097933 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:24.097939 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:24.097946 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:24.097952 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:24.097959 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:24.097965 | orchestrator |
2026-04-01 00:50:24.097971 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-01 00:50:24.097978 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:11.007) 0:00:24.044 *******
2026-04-01 00:50:24.097984 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:24.097991 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:24.097997 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:24.098004 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:24.098010 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:24.098044 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:24.098051 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.098057 | orchestrator |
2026-04-01 00:50:24.098063 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-01 00:50:24.098070 | orchestrator | Wednesday 01 April 2026 00:49:57 +0000 (0:00:37.282) 0:01:01.327 *******
2026-04-01 00:50:24.098077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:50:24.098085 | orchestrator |
2026-04-01 00:50:24.098091 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-01 00:50:24.098098 | orchestrator | Wednesday 01 April 2026 00:49:59 +0000 (0:00:01.306) 0:01:02.634 *******
2026-04-01 00:50:24.098104 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-01 00:50:24.098110 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-01 00:50:24.098117 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-01 00:50:24.098123 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-01 00:50:24.098136 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-01 00:50:24.098143 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-01 00:50:24.098150 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-01 00:50:24.098157 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-01 00:50:24.098163 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-01 00:50:24.098169 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-01 00:50:24.098176 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-01 00:50:24.098183 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-01 00:50:24.098190 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-01 00:50:24.098198 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-01 00:50:24.098209 | orchestrator |
2026-04-01 00:50:24.098216 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-01 00:50:24.098224 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:03.978) 0:01:06.612 *******
2026-04-01 00:50:24.098231 | orchestrator | ok: [testbed-manager]
2026-04-01 00:50:24.098237 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.098244 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.098251 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.098262 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:24.098268 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:24.098314 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:24.098321 | orchestrator |
2026-04-01 00:50:24.098328 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-01 00:50:24.098335 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:01.199) 0:01:07.812 *******
2026-04-01 00:50:24.098342 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:24.098349 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.098356 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:24.098363 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:24.098373 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:24.098380 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:24.098387 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:24.098393 | orchestrator |
2026-04-01 00:50:24.098401 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-01 00:50:24.098408 | orchestrator | Wednesday 01 April 2026 00:50:05 +0000 (0:00:01.515) 0:01:09.327 *******
2026-04-01 00:50:24.098414 | orchestrator | ok: [testbed-manager]
2026-04-01 00:50:24.098421 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.098428 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.098435 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.098441 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:24.098448 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:24.098455 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:24.098462 | orchestrator |
2026-04-01 00:50:24.098469 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-01 00:50:24.098475 | orchestrator | Wednesday 01 April 2026 00:50:07 +0000 (0:00:01.535) 0:01:10.863 *******
2026-04-01 00:50:24.098482 | orchestrator | ok: [testbed-manager]
2026-04-01 00:50:24.098489 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:24.098496 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:24.098502 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:24.098509 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:24.098516 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:24.098523 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:24.098529 | orchestrator |
2026-04-01 00:50:24.098536 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-01 00:50:24.098543 | orchestrator | Wednesday 01 April 2026 00:50:09 +0000 (0:00:01.787) 0:01:12.651 *******
2026-04-01 00:50:24.098550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-01 00:50:24.098558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:50:24.098565 | orchestrator |
2026-04-01 00:50:24.098572 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-01 00:50:24.098578 | orchestrator | Wednesday 01 April 2026 00:50:10 +0000 (0:00:01.230) 0:01:13.881 *******
2026-04-01 00:50:24.098585 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.098591 | orchestrator |
2026-04-01 00:50:24.098598 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-01 00:50:24.098604 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:01.771) 0:01:15.653 *******
2026-04-01 00:50:24.098611 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:50:24.098617 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:50:24.098623 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:50:24.098629 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:50:24.098635 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:50:24.098642 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:50:24.098648 | orchestrator | changed: [testbed-manager]
2026-04-01 00:50:24.098654 | orchestrator |
2026-04-01 00:50:24.098661 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:50:24.098673 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098681 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098687 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098694 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098705 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098711 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098717 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:50:24.098723 | orchestrator |
2026-04-01 00:50:24.098729 | orchestrator |
2026-04-01 00:50:24.098736 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:24.098743 | orchestrator | Wednesday 01 April 2026 00:50:23 +0000 (0:00:11.080) 0:01:26.733 *******
2026-04-01 00:50:24.098749 | orchestrator | ===============================================================================
2026-04-01 00:50:24.098756 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.28s
2026-04-01 00:50:24.098762 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.08s
2026-04-01 00:50:24.098769 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.01s
2026-04-01 00:50:24.098775 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.69s
2026-04-01 00:50:24.098798 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.98s
2026-04-01 00:50:24.098805 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.89s
2026-04-01 00:50:24.098811 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.80s
2026-04-01 00:50:24.098817 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.79s
2026-04-01 00:50:24.098827 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.77s
2026-04-01 00:50:24.098834 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.71s
2026-04-01 00:50:24.098840 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.54s
2026-04-01 00:50:24.098846 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.52s
2026-04-01 00:50:24.098852 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.31s
2026-04-01 00:50:24.098858 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.25s
2026-04-01 00:50:24.098865 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.23s
2026-04-01 00:50:24.098871 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.20s
2026-04-01 00:50:24.099766 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED
2026-04-01 00:50:24.100501 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:24.101663 | orchestrator | 2026-04-01 00:50:24 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:24.101909 | orchestrator | 2026-04-01 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:27.169606 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task f9cccee9-533c-45ba-9478-1374bd3006be is in state SUCCESS
2026-04-01 00:50:27.170784 | orchestrator |
2026-04-01 00:50:27.170850 | orchestrator |
2026-04-01 00:50:27.170861 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:50:27.170870 | orchestrator |
2026-04-01 00:50:27.170877 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:50:27.170883 | orchestrator | Wednesday 01 April 2026 00:50:10 +0000 (0:00:00.662) 0:00:00.662 *******
2026-04-01 00:50:27.170889 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:27.170897 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:27.170904 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:27.170911 | orchestrator |
2026-04-01 00:50:27.170917 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:50:27.170923 | orchestrator | Wednesday 01 April 2026 00:50:10 +0000 (0:00:00.471) 0:00:01.134 *******
2026-04-01 00:50:27.170930 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-01 00:50:27.170937 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-01 00:50:27.170944 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-01 00:50:27.170951 | orchestrator |
2026-04-01 00:50:27.170958 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-01 00:50:27.170965 | orchestrator |
2026-04-01 00:50:27.170972 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-01 00:50:27.170977 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:00.484) 0:00:01.618 *******
2026-04-01 00:50:27.170981 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:50:27.170985 | orchestrator |
2026-04-01 00:50:27.171002 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-01 00:50:27.171006 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:00.832) 0:00:02.450 *******
2026-04-01 00:50:27.171013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-01 00:50:27.171020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171071 | orchestrator | 2026-04-01 00:50:27.171075 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-01 00:50:27.171080 | orchestrator | Wednesday 01 April 2026 00:50:14 +0000 (0:00:02.253) 0:00:04.704 ******* 2026-04-01 00:50:27.171084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171125 | orchestrator | 2026-04-01 00:50:27.171129 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-01 00:50:27.171133 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:02.357) 0:00:07.061 ******* 2026-04-01 00:50:27.171137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171169 | orchestrator | 2026-04-01 00:50:27.171173 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-01 00:50:27.171177 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:02.806) 0:00:09.868 ******* 2026-04-01 00:50:27.171181 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 00:50:27.171209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-01 
00:50:27.171213 | orchestrator | 2026-04-01 00:50:27.171217 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-01 00:50:27.171221 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:01.711) 0:00:11.579 ******* 2026-04-01 00:50:27.171225 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:50:27.171229 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:27.171233 | orchestrator | } 2026-04-01 00:50:27.171237 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:50:27.171241 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:27.171245 | orchestrator | } 2026-04-01 00:50:27.171248 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:50:27.171252 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:27.171256 | orchestrator | } 2026-04-01 00:50:27.171260 | orchestrator | 2026-04-01 00:50:27.171287 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:50:27.171293 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:00.585) 0:00:12.164 ******* 2026-04-01 00:50:27.171299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171323 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:27.171330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171342 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 00:50:27.171353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-01 00:50:27.171366 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:27.171373 | orchestrator | 2026-04-01 00:50:27.171380 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:50:27.171388 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.796) 0:00:12.961 ******* 2026-04-01 00:50:27.171394 | orchestrator | 2026-04-01 00:50:27.171412 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:50:27.171419 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.083) 0:00:13.044 ******* 
2026-04-01 00:50:27.171426 | orchestrator | 2026-04-01 00:50:27.171433 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-01 00:50:27.171445 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.060) 0:00:13.105 ******* 2026-04-01 00:50:27.171452 | orchestrator | 2026-04-01 00:50:27.171468 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-01 00:50:27.171475 | orchestrator | Wednesday 01 April 2026 00:50:22 +0000 (0:00:00.074) 0:00:13.179 ******* 2026-04-01 00:50:27.171636 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_r_abdnic/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_r_abdnic/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_r_abdnic/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_r_abdnic/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:27.171651 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kf76dalh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kf76dalh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_kf76dalh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_kf76dalh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:27.171675 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dldqeuue/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dldqeuue/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_dldqeuue/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dldqeuue/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:27.171681 | orchestrator | 2026-04-01 00:50:27.171686 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:27.171691 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-01 00:50:27.171696 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-01 00:50:27.171706 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-01 00:50:27.171711 | orchestrator | 2026-04-01 00:50:27.171715 | orchestrator | 2026-04-01 00:50:27.171719 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:50:27.171729 | orchestrator | Wednesday 01 April 2026 00:50:25 +0000 (0:00:02.105) 0:00:15.285 ******* 2026-04-01 00:50:27.171734 | orchestrator | =============================================================================== 2026-04-01 00:50:27.171738 | orchestrator | redis : Copying over redis config files --------------------------------- 2.81s 2026-04-01 00:50:27.171743 | orchestrator | redis : Copying over default config.json files -------------------------- 2.36s 2026-04-01 00:50:27.171747 | orchestrator | redis : Ensuring config directories exist 
------------------------------- 2.25s 2026-04-01 00:50:27.171751 | orchestrator | redis : Restart redis container ----------------------------------------- 2.11s 2026-04-01 00:50:27.171755 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.71s 2026-04-01 00:50:27.171759 | orchestrator | redis : include_tasks --------------------------------------------------- 0.83s 2026-04-01 00:50:27.171763 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.80s 2026-04-01 00:50:27.171767 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.58s 2026-04-01 00:50:27.171771 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-04-01 00:50:27.171775 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2026-04-01 00:50:27.171778 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-04-01 00:50:27.171782 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:27.173417 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:27.174848 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:27.178348 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:27.180018 | orchestrator | 2026-04-01 00:50:27 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:50:27.180050 | orchestrator | 2026-04-01 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:30.223164 | orchestrator | 2026-04-01 00:50:30 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:30.227095 | 
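The pull failures above all point at `registry.osism.tech/kolla/release//redis` — note the double slash, which suggests an empty namespace segment in the assembled image name (presumably an unset variable between `release` and the image name; which variable is not visible in this log). Docker rejects any repository path containing an empty component with exactly the `invalid reference format` error seen here. A minimal sketch, approximating the Docker/OCI reference grammar for path components (the real grammar lives in distribution/reference; this regex is a simplification):

```python
import re

# Simplified approximation of one path component of a Docker/OCI image
# reference: lowercase alphanumerics joined by '.', '_', '__', or '-' runs.
# An empty component (produced by "//") can never match.
PATH_COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:\.|_|__|-+)[a-z0-9]+)*$")

def invalid_components(repository: str) -> list:
    """Return the path components of `repository` that violate the grammar.

    The leading host (everything before the first '/') is skipped, since
    hostnames follow different rules.
    """
    _host, _, remainder = repository.partition("/")
    return [c for c in remainder.split("/") if not PATH_COMPONENT.match(c)]

# The failing reference from the log: the '//' yields an empty component.
bad = invalid_components("registry.osism.tech/kolla/release//redis")
# What the reference presumably should have looked like with the
# namespace variable populated (hypothetical value):
good = invalid_components("registry.osism.tech/kolla/release/redis")
```

Here `bad` contains the empty string while `good` is empty, mirroring why the daemon accepts one reference and rejects the other before any network pull is attempted.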
orchestrator | 2026-04-01 00:50:30 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:30.228971 | orchestrator | 2026-04-01 00:50:30 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:30.229749 | orchestrator | 2026-04-01 00:50:30 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:30.231153 | orchestrator | 2026-04-01 00:50:30 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:50:30.231192 | orchestrator | 2026-04-01 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:33.274887 | orchestrator | 2026-04-01 00:50:33 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:33.275816 | orchestrator | 2026-04-01 00:50:33 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:33.277442 | orchestrator | 2026-04-01 00:50:33 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:33.278425 | orchestrator | 2026-04-01 00:50:33 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:33.279225 | orchestrator | 2026-04-01 00:50:33 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:50:33.279276 | orchestrator | 2026-04-01 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:36.343872 | orchestrator | 2026-04-01 00:50:36 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:36.344932 | orchestrator | 2026-04-01 00:50:36 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:36.347056 | orchestrator | 2026-04-01 00:50:36 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:36.348875 | orchestrator | 2026-04-01 00:50:36 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:36.350118 | 
orchestrator | 2026-04-01 00:50:36 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:50:36.350158 | orchestrator | 2026-04-01 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:39.379914 | orchestrator | 2026-04-01 00:50:39 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:39.380615 | orchestrator | 2026-04-01 00:50:39 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:39.383296 | orchestrator | 2026-04-01 00:50:39 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state STARTED 2026-04-01 00:50:39.384427 | orchestrator | 2026-04-01 00:50:39 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:50:39.385221 | orchestrator | 2026-04-01 00:50:39 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:50:39.385313 | orchestrator | 2026-04-01 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:50:42.414077 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:50:42.414446 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:50:42.415760 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task a5235c33-929f-4f8f-8a33-8b35eb2dac0f is in state SUCCESS 2026-04-01 00:50:42.417823 | orchestrator | 2026-04-01 00:50:42.417857 | orchestrator | 2026-04-01 00:50:42.417862 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:50:42.417867 | orchestrator | 2026-04-01 00:50:42.417871 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:50:42.417876 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:01.101) 0:00:01.101 ******* 2026-04-01 00:50:42.417880 | orchestrator | ok: [testbed-node-0] 2026-04-01 
00:50:42.417885 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:50:42.417889 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:50:42.417894 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:50:42.417898 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:50:42.417902 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:50:42.417906 | orchestrator | 2026-04-01 00:50:42.417910 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:50:42.417914 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:00.973) 0:00:02.075 ******* 2026-04-01 00:50:42.417924 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.417929 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.417933 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.417938 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.418009 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.418070 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-01 00:50:42.418077 | orchestrator | 2026-04-01 00:50:42.418084 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-01 00:50:42.418094 | orchestrator | 2026-04-01 00:50:42.418101 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-01 00:50:42.418107 | orchestrator | Wednesday 01 April 2026 00:50:13 +0000 (0:00:01.721) 0:00:03.796 ******* 2026-04-01 00:50:42.418115 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 
00:50:42.418123 | orchestrator | 2026-04-01 00:50:42.418130 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-01 00:50:42.418136 | orchestrator | Wednesday 01 April 2026 00:50:15 +0000 (0:00:01.620) 0:00:05.417 ******* 2026-04-01 00:50:42.418142 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-01 00:50:42.418149 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-01 00:50:42.418156 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-01 00:50:42.418162 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-01 00:50:42.418169 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-01 00:50:42.418175 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-01 00:50:42.418182 | orchestrator | 2026-04-01 00:50:42.418189 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-01 00:50:42.418195 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:01.578) 0:00:06.996 ******* 2026-04-01 00:50:42.418202 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-01 00:50:42.418211 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-01 00:50:42.418220 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-01 00:50:42.418226 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-01 00:50:42.418232 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-01 00:50:42.418285 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-01 00:50:42.418291 | orchestrator | 2026-04-01 00:50:42.418298 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-01 00:50:42.418304 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:02.291) 0:00:09.287 ******* 2026-04-01 00:50:42.418311 | orchestrator | 
skipping: [testbed-node-0] => (item=openvswitch)  2026-04-01 00:50:42.418317 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:42.418325 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-01 00:50:42.418331 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:42.418337 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-01 00:50:42.418341 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:42.418345 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-01 00:50:42.418349 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:42.418353 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-01 00:50:42.418357 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:42.418361 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-01 00:50:42.418365 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:42.418369 | orchestrator | 2026-04-01 00:50:42.418372 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-01 00:50:42.418376 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:01.324) 0:00:10.612 ******* 2026-04-01 00:50:42.418380 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:42.418384 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:42.418388 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:42.418391 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:42.418403 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:42.418407 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:42.418411 | orchestrator | 2026-04-01 00:50:42.418415 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-01 00:50:42.418419 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:01.157) 0:00:11.769 ******* 2026-04-01 00:50:42.418437 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418550 | orchestrator | 2026-04-01 00:50:42.418556 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-01 00:50:42.418562 | orchestrator | Wednesday 01 April 2026 00:50:23 +0000 (0:00:01.891) 0:00:13.661 ******* 2026-04-01 00:50:42.418571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418707 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418744 | orchestrator | 2026-04-01 00:50:42.418749 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-01 00:50:42.418754 | orchestrator | Wednesday 01 April 2026 00:50:28 +0000 (0:00:04.535) 0:00:18.197 ******* 2026-04-01 00:50:42.418758 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:42.418763 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:42.418768 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:42.418772 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:42.418777 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:42.418781 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:42.418786 | orchestrator | 2026-04-01 00:50:42.418790 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-01 00:50:42.418795 | orchestrator | Wednesday 01 April 2026 00:50:29 +0000 (0:00:00.980) 0:00:19.178 ******* 2026-04-01 00:50:42.418800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-04-01 00:50:42.418837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-01 00:50:42.418879 | orchestrator | 2026-04-01 00:50:42.418884 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-01 00:50:42.418889 | orchestrator | Wednesday 01 April 2026 00:50:32 +0000 (0:00:03.323) 0:00:22.501 ******* 2026-04-01 00:50:42.418896 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:50:42.418901 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418906 | orchestrator | } 2026-04-01 00:50:42.418911 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:50:42.418915 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418920 | orchestrator | } 2026-04-01 00:50:42.418924 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:50:42.418928 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418932 | orchestrator | } 2026-04-01 00:50:42.418935 | orchestrator | changed: [testbed-node-4] => { 2026-04-01 00:50:42.418939 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418943 | orchestrator | } 2026-04-01 00:50:42.418947 | orchestrator | changed: [testbed-node-3] => { 2026-04-01 00:50:42.418951 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418955 | orchestrator | } 2026-04-01 00:50:42.418958 | orchestrator | changed: [testbed-node-5] => { 2026-04-01 00:50:42.418962 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:42.418966 | orchestrator | } 2026-04-01 00:50:42.418970 | orchestrator | 
2026-04-01 00:50:42.418974 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:50:42.418978 | orchestrator | Wednesday 01 April 2026 00:50:33 +0000 (0:00:01.064) 0:00:23.565 ******* 2026-04-01 00:50:42.418982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.418989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.418994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.418998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.419005 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:42.419009 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:42.419387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.419399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.419403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.419414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.419418 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:42.419422 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:42.419429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.419437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.419442 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:42.419446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-01 00:50:42.419450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-01 00:50:42.419454 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:50:42.419458 | orchestrator | 2026-04-01 00:50:42.419461 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419465 | orchestrator | Wednesday 01 April 2026 
00:50:36 +0000 (0:00:02.715) 0:00:26.280 ******* 2026-04-01 00:50:42.419469 | orchestrator | 2026-04-01 00:50:42.419473 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419477 | orchestrator | Wednesday 01 April 2026 00:50:36 +0000 (0:00:00.132) 0:00:26.413 ******* 2026-04-01 00:50:42.419481 | orchestrator | 2026-04-01 00:50:42.419484 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419488 | orchestrator | Wednesday 01 April 2026 00:50:36 +0000 (0:00:00.272) 0:00:26.686 ******* 2026-04-01 00:50:42.419492 | orchestrator | 2026-04-01 00:50:42.419496 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419500 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.329) 0:00:27.015 ******* 2026-04-01 00:50:42.419504 | orchestrator | 2026-04-01 00:50:42.419510 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419514 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.249) 0:00:27.265 ******* 2026-04-01 00:50:42.419518 | orchestrator | 2026-04-01 00:50:42.419522 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-01 00:50:42.419526 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.149) 0:00:27.414 ******* 2026-04-01 00:50:42.419533 | orchestrator | 2026-04-01 00:50:42.419537 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-01 00:50:42.419540 | orchestrator | Wednesday 01 April 2026 00:50:37 +0000 (0:00:00.129) 0:00:27.544 ******* 2026-04-01 00:50:42.419548 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_t9xfa7mr/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_t9xfa7mr/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_t9xfa7mr/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_t9xfa7mr/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419562 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6nshdn3a/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6nshdn3a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6nshdn3a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6nshdn3a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419571 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_8sqgrr5q/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_8sqgrr5q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_8sqgrr5q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_8sqgrr5q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419582 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_31gmjs67/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_31gmjs67/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_31gmjs67/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_31gmjs67/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419595 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e78hsjbh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e78hsjbh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_e78hsjbh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_e78hsjbh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419605 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5hus7jg9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5hus7jg9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_5hus7jg9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5hus7jg9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:50:42.419609 | orchestrator | 2026-04-01 00:50:42.419613 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:50:42.419618 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:50:42.419622 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:50:42.419626 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:50:42.419630 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:50:42.419634 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 
2026-04-01 00:50:42.419642 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-01 00:50:42.419646 | orchestrator |
2026-04-01 00:50:42.419650 | orchestrator |
2026-04-01 00:50:42.419656 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:42.419660 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:02.373) 0:00:29.917 *******
2026-04-01 00:50:42.419664 | orchestrator | ===============================================================================
2026-04-01 00:50:42.419668 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.54s
2026-04-01 00:50:42.419671 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.32s
2026-04-01 00:50:42.419675 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.72s
2026-04-01 00:50:42.419679 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 2.37s
2026-04-01 00:50:42.419685 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.29s
2026-04-01 00:50:42.419689 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.89s
2026-04-01 00:50:42.419693 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.72s
2026-04-01 00:50:42.419697 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.62s
2026-04-01 00:50:42.419700 | orchestrator | module-load : Load modules ---------------------------------------------- 1.58s
2026-04-01 00:50:42.419704 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.32s
2026-04-01 00:50:42.419708 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.26s
2026-04-01 00:50:42.419712 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.16s
2026-04-01 00:50:42.419716 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.06s
2026-04-01 00:50:42.419720 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.98s
2026-04-01 00:50:42.419724 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s
2026-04-01 00:50:42.419728 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:42.419732 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:42.419736 | orchestrator | 2026-04-01 00:50:42 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state STARTED
2026-04-01 00:50:42.419740 | orchestrator | 2026-04-01 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:45.447746 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:45.447831 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:45.447848 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:45.448418 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:45.449060 | orchestrator | 2026-04-01 00:50:45 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state STARTED
2026-04-01 00:50:45.449089 | orchestrator | 2026-04-01 00:50:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:48.482138 | orchestrator | 2026-04-01 00:50:48 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:48.486203 | orchestrator | 2026-04-01 00:50:48 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:48.489138 | orchestrator | 2026-04-01 00:50:48 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:48.490597 | orchestrator | 2026-04-01 00:50:48 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:48.492023 | orchestrator | 2026-04-01 00:50:48 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state STARTED
2026-04-01 00:50:48.492063 | orchestrator | 2026-04-01 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:51.529810 | orchestrator | 2026-04-01 00:50:51 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:51.529999 | orchestrator | 2026-04-01 00:50:51 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:51.530848 | orchestrator | 2026-04-01 00:50:51 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:51.531877 | orchestrator | 2026-04-01 00:50:51 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:51.532639 | orchestrator | 2026-04-01 00:50:51 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state STARTED
2026-04-01 00:50:51.532666 | orchestrator | 2026-04-01 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:54.568332 | orchestrator | 2026-04-01 00:50:54 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:54.568418 | orchestrator | 2026-04-01 00:50:54 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:54.570200 | orchestrator | 2026-04-01 00:50:54 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:54.570384 | orchestrator | 2026-04-01 00:50:54 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:54.572305 | orchestrator | 2026-04-01 00:50:54 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state STARTED
2026-04-01 00:50:54.572364 | orchestrator | 2026-04-01 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:50:57.595957 | orchestrator | 2026-04-01 00:50:57 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:50:57.599037 | orchestrator | 2026-04-01 00:50:57 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:50:57.599993 | orchestrator | 2026-04-01 00:50:57 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:50:57.601330 | orchestrator | 2026-04-01 00:50:57 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED
2026-04-01 00:50:57.602505 | orchestrator | 2026-04-01 00:50:57 | INFO  | Task 7a53e52c-e1c6-4ae9-b923-0b46511b001a is in state SUCCESS
2026-04-01 00:50:57.602564 | orchestrator |
2026-04-01 00:50:57.603553 | orchestrator |
2026-04-01 00:50:57.603582 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:50:57.603591 | orchestrator |
2026-04-01 00:50:57.603598 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:50:57.603604 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:00.198) 0:00:00.198 *******
2026-04-01 00:50:57.603611 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:50:57.603619 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:50:57.603624 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:50:57.603628 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:50:57.603632 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:50:57.603636 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:50:57.603640 | orchestrator |
2026-04-01 00:50:57.603644 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:50:57.603648 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000
(0:00:00.624) 0:00:00.822 ******* 2026-04-01 00:50:57.603652 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-01 00:50:57.603656 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-01 00:50:57.603673 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-01 00:50:57.603677 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-01 00:50:57.603681 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-01 00:50:57.603685 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-01 00:50:57.603688 | orchestrator | 2026-04-01 00:50:57.603692 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-01 00:50:57.603696 | orchestrator | 2026-04-01 00:50:57.603700 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-01 00:50:57.603704 | orchestrator | Wednesday 01 April 2026 00:50:44 +0000 (0:00:00.964) 0:00:01.786 ******* 2026-04-01 00:50:57.603708 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:50:57.603713 | orchestrator | 2026-04-01 00:50:57.603718 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-01 00:50:57.603724 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:01.010) 0:00:02.797 ******* 2026-04-01 00:50:57.603733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-01 00:50:57.603743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603792 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603799 | orchestrator | 2026-04-01 00:50:57.603809 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-01 00:50:57.603813 | orchestrator | Wednesday 01 April 2026 00:50:47 +0000 (0:00:02.041) 0:00:04.838 ******* 2026-04-01 00:50:57.603817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603841 | orchestrator | 2026-04-01 00:50:57.603845 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-01 00:50:57.603848 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:01.380) 0:00:06.219 ******* 2026-04-01 00:50:57.603854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603885 | orchestrator | 2026-04-01 00:50:57.603889 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-01 00:50:57.603893 | orchestrator | Wednesday 01 April 2026 00:50:50 +0000 (0:00:01.395) 0:00:07.614 ******* 2026-04-01 00:50:57.603897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-01 00:50:57.603905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603928 | orchestrator | 2026-04-01 00:50:57.603932 | orchestrator | 
TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-01 00:50:57.603936 | orchestrator | Wednesday 01 April 2026 00:50:52 +0000 (0:00:01.717) 0:00:09.332 ******* 2026-04-01 00:50:57.603941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 00:50:57.603986 | orchestrator | 2026-04-01 00:50:57.603993 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-01 00:50:57.603999 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:01.620) 0:00:10.952 ******* 2026-04-01 00:50:57.604006 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:50:57.604016 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604023 | orchestrator | } 2026-04-01 00:50:57.604030 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:50:57.604039 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604046 | orchestrator | } 2026-04-01 00:50:57.604053 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:50:57.604059 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604065 | orchestrator | } 2026-04-01 00:50:57.604071 | 
orchestrator | changed: [testbed-node-3] => { 2026-04-01 00:50:57.604076 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604082 | orchestrator | } 2026-04-01 00:50:57.604088 | orchestrator | changed: [testbed-node-4] => { 2026-04-01 00:50:57.604093 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604099 | orchestrator | } 2026-04-01 00:50:57.604106 | orchestrator | changed: [testbed-node-5] => { 2026-04-01 00:50:57.604112 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:50:57.604119 | orchestrator | } 2026-04-01 00:50:57.604125 | orchestrator | 2026-04-01 00:50:57.604132 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:50:57.604140 | orchestrator | Wednesday 01 April 2026 00:50:54 +0000 (0:00:00.646) 0:00:11.598 ******* 2026-04-01 00:50:57.604144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:57.604148 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:50:57.604152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:57.604156 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:50:57.604160 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:57.604164 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:50:57.604167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:57.604171 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:50:57.604175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:50:57.604183 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:50:57.604187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-01 00:50:57.604191 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:50:57.604205 | orchestrator |
2026-04-01 00:50:57.604210 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-01 00:50:57.604213 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:01.170) 0:00:12.769 *******
2026-04-01 00:50:57.604217 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604221 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604228 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604232 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604235 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604239 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:50:57.604243 | orchestrator |
2026-04-01 00:50:57.604247 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:50:57.604253 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604258 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604261 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604265 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604269 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604273 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-01 00:50:57.604277 | orchestrator |
2026-04-01 00:50:57.604281 | orchestrator |
2026-04-01 00:50:57.604284 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:50:57.604288 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:01.228) 0:00:13.998 *******
2026-04-01 00:50:57.604292 | orchestrator | ===============================================================================
2026-04-01 00:50:57.604296 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.04s
2026-04-01 00:50:57.604300 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.72s
2026-04-01 00:50:57.604304 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.62s
2026-04-01 00:50:57.604313 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.40s
2026-04-01 00:50:57.604320 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.38s 2026-04-01 00:50:57.604327 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.23s 2026-04-01 00:50:57.604333 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.17s 2026-04-01 00:50:57.604339 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.01s 2026-04-01 00:50:57.604345 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-04-01 00:50:57.604350 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.65s 2026-04-01 00:50:57.604356 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2026-04-01 00:50:57.604362 | orchestrator | 2026-04-01 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:00.636390 | orchestrator | 2026-04-01 00:51:00 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:51:00.637717 | orchestrator | 2026-04-01 00:51:00 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:51:00.639485 | orchestrator | 2026-04-01 00:51:00 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:51:00.641522 | orchestrator | 2026-04-01 00:51:00 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state STARTED 2026-04-01 00:51:00.641561 | orchestrator | 2026-04-01 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:03.680538 | orchestrator | 2026-04-01 00:51:03 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:51:03.684491 | orchestrator | 2026-04-01 00:51:03 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:51:03.688418 | orchestrator | 2026-04-01 00:51:03 | INFO  | Task 
933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:51:03.692419 | orchestrator | 2026-04-01 00:51:03 | INFO  | Task 8b84d661-106b-41f3-ba7d-6a08ff72b2d5 is in state SUCCESS 2026-04-01 00:51:03.694533 | orchestrator | 2026-04-01 00:51:03.694636 | orchestrator | 2026-04-01 00:51:03.694648 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-01 00:51:03.694657 | orchestrator | 2026-04-01 00:51:03.694664 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-01 00:51:03.694672 | orchestrator | Wednesday 01 April 2026 00:50:28 +0000 (0:00:00.160) 0:00:00.160 ******* 2026-04-01 00:51:03.694680 | orchestrator | ok: [localhost] => { 2026-04-01 00:51:03.694689 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-01 00:51:03.694696 | orchestrator | } 2026-04-01 00:51:03.694703 | orchestrator | 2026-04-01 00:51:03.694708 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-01 00:51:03.694714 | orchestrator | Wednesday 01 April 2026 00:50:28 +0000 (0:00:00.051) 0:00:00.211 ******* 2026-04-01 00:51:03.694721 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-01 00:51:03.694729 | orchestrator | ...ignoring 2026-04-01 00:51:03.694736 | orchestrator | 2026-04-01 00:51:03.694741 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-01 00:51:03.694748 | orchestrator | Wednesday 01 April 2026 00:50:32 +0000 (0:00:04.391) 0:00:04.603 ******* 2026-04-01 00:51:03.694754 | orchestrator | skipping: [localhost] 2026-04-01 00:51:03.694760 | orchestrator | 2026-04-01 00:51:03.694767 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-01 00:51:03.694774 | orchestrator | Wednesday 01 April 2026 00:50:33 +0000 (0:00:00.118) 0:00:04.721 ******* 2026-04-01 00:51:03.694781 | orchestrator | ok: [localhost] 2026-04-01 00:51:03.694815 | orchestrator | 2026-04-01 00:51:03.694822 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:51:03.694829 | orchestrator | 2026-04-01 00:51:03.694835 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:51:03.694841 | orchestrator | Wednesday 01 April 2026 00:50:33 +0000 (0:00:00.479) 0:00:05.201 ******* 2026-04-01 00:51:03.694847 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:03.694854 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:51:03.694874 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:51:03.694887 | orchestrator | 2026-04-01 00:51:03.694893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:51:03.694899 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:00.536) 0:00:05.737 ******* 2026-04-01 00:51:03.694905 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-01 00:51:03.694911 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2026-04-01 00:51:03.694917 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-01 00:51:03.694924 | orchestrator | 2026-04-01 00:51:03.694930 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-01 00:51:03.694936 | orchestrator | 2026-04-01 00:51:03.694942 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:51:03.694949 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:01.219) 0:00:06.957 ******* 2026-04-01 00:51:03.694955 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:03.694962 | orchestrator | 2026-04-01 00:51:03.695050 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-01 00:51:03.695066 | orchestrator | Wednesday 01 April 2026 00:50:36 +0000 (0:00:01.354) 0:00:08.311 ******* 2026-04-01 00:51:03.695072 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:03.695078 | orchestrator | 2026-04-01 00:51:03.695084 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-01 00:51:03.695090 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:01.396) 0:00:09.707 ******* 2026-04-01 00:51:03.695096 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695103 | orchestrator | 2026-04-01 00:51:03.695110 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-01 00:51:03.695117 | orchestrator | Wednesday 01 April 2026 00:50:38 +0000 (0:00:00.692) 0:00:10.400 ******* 2026-04-01 00:51:03.695124 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695129 | orchestrator | 2026-04-01 00:51:03.695135 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-01 00:51:03.695142 | 
orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.464) 0:00:10.865 ******* 2026-04-01 00:51:03.695148 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695154 | orchestrator | 2026-04-01 00:51:03.695161 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-01 00:51:03.695167 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.251) 0:00:11.116 ******* 2026-04-01 00:51:03.695173 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695233 | orchestrator | 2026-04-01 00:51:03.695240 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:51:03.695246 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:00.237) 0:00:11.353 ******* 2026-04-01 00:51:03.695253 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:03.695259 | orchestrator | 2026-04-01 00:51:03.695266 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-01 00:51:03.695272 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:00.488) 0:00:11.842 ******* 2026-04-01 00:51:03.695278 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:51:03.695284 | orchestrator | 2026-04-01 00:51:03.695290 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-01 00:51:03.695307 | orchestrator | Wednesday 01 April 2026 00:50:40 +0000 (0:00:00.660) 0:00:12.503 ******* 2026-04-01 00:51:03.695314 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695321 | orchestrator | 2026-04-01 00:51:03.695328 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-01 00:51:03.695334 | orchestrator | Wednesday 01 April 2026 00:50:41 +0000 (0:00:00.498) 0:00:13.001 ******* 2026-04-01 00:51:03.695341 | orchestrator | 
skipping: [testbed-node-0] 2026-04-01 00:51:03.695348 | orchestrator | 2026-04-01 00:51:03.695371 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-01 00:51:03.695377 | orchestrator | Wednesday 01 April 2026 00:50:41 +0000 (0:00:00.227) 0:00:13.229 ******* 2026-04-01 00:51:03.695395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695419 | orchestrator | 2026-04-01 00:51:03.695432 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-01 00:51:03.695438 | orchestrator | Wednesday 01 April 2026 00:50:42 +0000 (0:00:01.044) 0:00:14.273 ******* 2026-04-01 00:51:03.695455 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695476 | orchestrator | 2026-04-01 00:51:03.695481 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-01 00:51:03.695487 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:01.307) 0:00:15.581 ******* 2026-04-01 00:51:03.695493 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:51:03.695499 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:51:03.695505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-01 00:51:03.695516 | 
orchestrator | 2026-04-01 00:51:03.695522 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-01 00:51:03.695528 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:01.624) 0:00:17.205 ******* 2026-04-01 00:51:03.695534 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:51:03.695539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:51:03.695545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-01 00:51:03.695550 | orchestrator | 2026-04-01 00:51:03.695556 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-01 00:51:03.695561 | orchestrator | Wednesday 01 April 2026 00:50:48 +0000 (0:00:02.596) 0:00:19.802 ******* 2026-04-01 00:51:03.695568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:51:03.695574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:51:03.695581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-01 00:51:03.695586 | orchestrator | 2026-04-01 00:51:03.695596 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-01 00:51:03.695602 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:01.097) 0:00:20.900 ******* 2026-04-01 00:51:03.695607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:51:03.695618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:51:03.695624 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-01 00:51:03.695630 | orchestrator | 2026-04-01 00:51:03.695636 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-01 00:51:03.695641 | orchestrator | Wednesday 01 April 2026 00:50:50 +0000 (0:00:01.464) 0:00:22.365 ******* 2026-04-01 00:51:03.695647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:51:03.695652 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:51:03.695658 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-01 00:51:03.695663 | orchestrator | 2026-04-01 00:51:03.695669 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-01 00:51:03.695674 | orchestrator | Wednesday 01 April 2026 00:50:51 +0000 (0:00:01.202) 0:00:23.568 ******* 2026-04-01 00:51:03.695680 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:51:03.695686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:51:03.695692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-01 00:51:03.695698 | orchestrator | 2026-04-01 00:51:03.695703 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-01 00:51:03.695709 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:01.541) 0:00:25.109 ******* 2026-04-01 00:51:03.695714 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:51:03.695720 | orchestrator | 2026-04-01 00:51:03.695726 | orchestrator | TASK [service-cert-copy : rabbitmq | 
Copying over extra CA certificates] ******* 2026-04-01 00:51:03.695731 | orchestrator | Wednesday 01 April 2026 00:50:54 +0000 (0:00:00.565) 0:00:25.675 ******* 2026-04-01 00:51:03.695738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695775 | orchestrator | 2026-04-01 00:51:03.695781 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-01 00:51:03.695787 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:01.556) 0:00:27.231 ******* 2026-04-01 00:51:03.695794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695805 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695817 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:03.695829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695839 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:03.695845 | orchestrator | 2026-04-01 00:51:03.695850 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-01 00:51:03.695855 | orchestrator | Wednesday 01 April 2026 00:50:55 +0000 (0:00:00.336) 0:00:27.567 ******* 2026-04-01 00:51:03.695862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695878 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.695883 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:51:03.695889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.695895 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:03.695900 | orchestrator | 2026-04-01 00:51:03.695905 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-01 00:51:03.695911 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:00.670) 0:00:28.238 ******* 2026-04-01 00:51:03.695925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:51:03.695951 | orchestrator | 2026-04-01 00:51:03.695957 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-01 00:51:03.695962 | orchestrator | Wednesday 01 April 2026 00:50:57 +0000 (0:00:01.028) 0:00:29.266 ******* 2026-04-01 00:51:03.695968 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:51:03.695974 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:51:03.695979 | orchestrator | } 2026-04-01 00:51:03.695985 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:51:03.695990 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:51:03.695996 | orchestrator | } 2026-04-01 00:51:03.696001 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:51:03.696007 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:51:03.696012 | orchestrator | } 2026-04-01 00:51:03.696018 | orchestrator | 2026-04-01 00:51:03.696023 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:51:03.696029 | orchestrator | Wednesday 01 April 2026 00:50:57 +0000 (0:00:00.324) 0:00:29.590 ******* 2026-04-01 00:51:03.696050 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.696057 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:51:03.696069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.696076 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:51:03.696082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:51:03.696088 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:51:03.696093 | orchestrator | 2026-04-01 00:51:03.696100 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-01 00:51:03.696105 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:00.777) 0:00:30.368 ******* 2026-04-01 00:51:03.696111 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:51:03.696116 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:51:03.696123 | orchestrator | changed: [testbed-node-2] 
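Note the image reference carried by every service item above: `registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328`. The doubled slash leaves an empty path component, which is exactly what the Docker daemon rejects as `invalid reference format` in the bootstrap task that follows. A minimal sketch of the check (a simplified stand-in for Docker's `distribution/reference` grammar, not the actual parser; `repo_path_valid` is a hypothetical helper, and it assumes the registry host carries no port):

```python
import re

# Simplified path-component rule from Docker's reference grammar:
# lowercase alphanumerics joined by separators. An empty component
# (what '//' produces) can never match.
COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*$")

def repo_path_valid(image):
    """Validate only the repository path of 'registry/path:tag'."""
    repo = image.rsplit(":", 1)[0]       # drop the tag (assumes no :port on the host)
    host, _, path = repo.partition("/")  # drop the registry host
    return bool(path) and all(COMPONENT.match(p) for p in path.split("/"))

# The reference from this log fails; the single-slash form passes.
print(repo_path_valid("registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328"))  # False
print(repo_path_valid("registry.osism.tech/kolla/release/rabbitmq:4.1.8.20260328"))   # True
```

In kolla-ansible deployments such a doubled slash typically comes from a registry or namespace variable that already ends in `/` being joined with another `/`-prefixed component; validating the rendered reference before the pull surfaces the problem earlier than the daemon's 400 response.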
2026-04-01 00:51:03.696128 | orchestrator | 2026-04-01 00:51:03.696135 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-01 00:51:03.696140 | orchestrator | Wednesday 01 April 2026 00:50:59 +0000 (0:00:00.794) 0:00:31.163 ******* 2026-04-01 00:51:03.696158 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ls3a0asu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ls3a0asu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ls3a0asu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:51:03.696170 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5heys8_o/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5heys8_o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5heys8_o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File 
\"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:51:03.696210 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_vsuv16xx/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_vsuv16xx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_vsuv16xx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:51:03.696224 | orchestrator | 2026-04-01 00:51:03.696230 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:51:03.696237 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-01 00:51:03.696244 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-04-01 00:51:03.696250 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-01 00:51:03.696255 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-01 00:51:03.696261 | orchestrator | 2026-04-01 00:51:03.696267 | orchestrator | 2026-04-01 00:51:03.696273 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:51:03.696278 | orchestrator | Wednesday 01 April 2026 00:51:00 +0000 (0:00:00.955) 0:00:32.119 ******* 2026-04-01 00:51:03.696283 | orchestrator | =============================================================================== 2026-04-01 00:51:03.696289 | orchestrator | Check RabbitMQ service -------------------------------------------------- 
4.39s 2026-04-01 00:51:03.696295 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.60s 2026-04-01 00:51:03.696301 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.62s 2026-04-01 00:51:03.696307 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.56s 2026-04-01 00:51:03.696313 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.54s 2026-04-01 00:51:03.696319 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.46s 2026-04-01 00:51:03.696325 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.40s 2026-04-01 00:51:03.696331 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.35s 2026-04-01 00:51:03.696336 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.31s 2026-04-01 00:51:03.696342 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s 2026-04-01 00:51:03.696347 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.20s 2026-04-01 00:51:03.696360 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.10s 2026-04-01 00:51:03.696365 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.04s 2026-04-01 00:51:03.696371 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.03s 2026-04-01 00:51:03.696377 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 0.96s 2026-04-01 00:51:03.696387 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s 2026-04-01 00:51:03.696393 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.78s 
2026-04-01 00:51:03.696399 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 0.69s 2026-04-01 00:51:03.696405 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 0.67s 2026-04-01 00:51:03.696415 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.66s 2026-04-01 00:51:03.696421 | orchestrator | 2026-04-01 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:06.735606 | orchestrator | 2026-04-01 00:51:06 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:51:06.738305 | orchestrator | 2026-04-01 00:51:06 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:51:06.740878 | orchestrator | 2026-04-01 00:51:06 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:51:06.741655 | orchestrator | 2026-04-01 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:09.775601 | orchestrator | 2026-04-01 00:51:09 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:51:09.775713 | orchestrator | 2026-04-01 00:51:09 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:51:09.776861 | orchestrator | 2026-04-01 00:51:09 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:51:09.776904 | orchestrator | 2026-04-01 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:51:12.827751 | orchestrator | 2026-04-01 00:51:12 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:51:12.828084 | orchestrator | 2026-04-01 00:51:12 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:51:12.829111 | orchestrator | 2026-04-01 00:51:12 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:51:12.829152 | orchestrator | 2026-04-01 00:51:12 | 
INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:35.353266 | orchestrator | 2026-04-01 00:52:35 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:35.353916 | orchestrator | 2026-04-01 00:52:35 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:35.355557 | orchestrator | 2026-04-01 00:52:35 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state
STARTED 2026-04-01 00:52:35.355593 | orchestrator | 2026-04-01 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:38.398241 | orchestrator | 2026-04-01 00:52:38 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:38.398348 | orchestrator | 2026-04-01 00:52:38 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:38.398799 | orchestrator | 2026-04-01 00:52:38 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:38.398831 | orchestrator | 2026-04-01 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:41.435479 | orchestrator | 2026-04-01 00:52:41 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:41.436249 | orchestrator | 2026-04-01 00:52:41 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:41.438234 | orchestrator | 2026-04-01 00:52:41 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:41.438271 | orchestrator | 2026-04-01 00:52:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:44.478541 | orchestrator | 2026-04-01 00:52:44 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:44.478796 | orchestrator | 2026-04-01 00:52:44 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:44.479848 | orchestrator | 2026-04-01 00:52:44 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:44.479895 | orchestrator | 2026-04-01 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:47.525140 | orchestrator | 2026-04-01 00:52:47 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:47.527800 | orchestrator | 2026-04-01 00:52:47 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:47.529729 | orchestrator | 
2026-04-01 00:52:47 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:47.529800 | orchestrator | 2026-04-01 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:50.571323 | orchestrator | 2026-04-01 00:52:50 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:50.574982 | orchestrator | 2026-04-01 00:52:50 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:50.575668 | orchestrator | 2026-04-01 00:52:50 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:50.575715 | orchestrator | 2026-04-01 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:53.610129 | orchestrator | 2026-04-01 00:52:53 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:53.610191 | orchestrator | 2026-04-01 00:52:53 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:53.610209 | orchestrator | 2026-04-01 00:52:53 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:53.610216 | orchestrator | 2026-04-01 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:56.652275 | orchestrator | 2026-04-01 00:52:56 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:56.652719 | orchestrator | 2026-04-01 00:52:56 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:56.654050 | orchestrator | 2026-04-01 00:52:56 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:56.654111 | orchestrator | 2026-04-01 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:52:59.703301 | orchestrator | 2026-04-01 00:52:59 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:52:59.705473 | orchestrator | 2026-04-01 00:52:59 | INFO  | Task 
b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:52:59.706581 | orchestrator | 2026-04-01 00:52:59 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:52:59.706626 | orchestrator | 2026-04-01 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:02.743725 | orchestrator | 2026-04-01 00:53:02 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:02.744618 | orchestrator | 2026-04-01 00:53:02 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:02.745421 | orchestrator | 2026-04-01 00:53:02 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:02.745499 | orchestrator | 2026-04-01 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:05.788132 | orchestrator | 2026-04-01 00:53:05 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:05.790235 | orchestrator | 2026-04-01 00:53:05 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:05.794644 | orchestrator | 2026-04-01 00:53:05 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:05.794716 | orchestrator | 2026-04-01 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:08.836734 | orchestrator | 2026-04-01 00:53:08 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:08.837383 | orchestrator | 2026-04-01 00:53:08 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:08.839741 | orchestrator | 2026-04-01 00:53:08 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:08.839787 | orchestrator | 2026-04-01 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:11.875683 | orchestrator | 2026-04-01 00:53:11 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state 
STARTED 2026-04-01 00:53:11.875963 | orchestrator | 2026-04-01 00:53:11 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:11.876660 | orchestrator | 2026-04-01 00:53:11 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:11.876722 | orchestrator | 2026-04-01 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:15.059489 | orchestrator | 2026-04-01 00:53:15 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:15.059564 | orchestrator | 2026-04-01 00:53:15 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:15.059571 | orchestrator | 2026-04-01 00:53:15 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:15.059576 | orchestrator | 2026-04-01 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:18.045916 | orchestrator | 2026-04-01 00:53:18 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:18.046069 | orchestrator | 2026-04-01 00:53:18 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:18.046690 | orchestrator | 2026-04-01 00:53:18 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:18.046773 | orchestrator | 2026-04-01 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:21.073872 | orchestrator | 2026-04-01 00:53:21 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED 2026-04-01 00:53:21.074787 | orchestrator | 2026-04-01 00:53:21 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:53:21.075980 | orchestrator | 2026-04-01 00:53:21 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED 2026-04-01 00:53:21.076026 | orchestrator | 2026-04-01 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:53:24.110186 | orchestrator | 
2026-04-01 00:53:24 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:24.112518 | orchestrator | 2026-04-01 00:53:24 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:24.115626 | orchestrator | 2026-04-01 00:53:24 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state STARTED
2026-04-01 00:53:24.115714 | orchestrator | 2026-04-01 00:53:24 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:27.150831 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:27.150903 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:27.152486 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task b24c68d2-c6de-401f-a74a-9cfbd383f527 is in state STARTED
2026-04-01 00:53:27.154123 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task a26ff3a5-fe7e-4dfc-ab8b-f11d8b67bea2 is in state STARTED
2026-04-01 00:53:27.156888 | orchestrator |
2026-04-01 00:53:27.156925 | orchestrator | 2026-04-01 00:53:27 | INFO  | Task 933db249-6878-4db3-9d15-994592505608 is in state SUCCESS
2026-04-01 00:53:27.158436 | orchestrator |
2026-04-01 00:53:27.158470 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-01 00:53:27.158476 | orchestrator |
2026-04-01 00:53:27.158480 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-01 00:53:27.158485 | orchestrator | Wednesday 01 April 2026 00:48:51 +0000 (0:00:00.271) 0:00:00.271 *******
2026-04-01 00:53:27.158490 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:53:27.158495 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:53:27.158517 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:53:27.158521 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.158525 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.158529 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.158533 | orchestrator |
2026-04-01 00:53:27.158537 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-01 00:53:27.158541 | orchestrator | Wednesday 01 April 2026 00:48:52 +0000 (0:00:00.578) 0:00:00.850 *******
2026-04-01 00:53:27.158545 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158549 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158553 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158557 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158561 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158565 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158569 | orchestrator |
2026-04-01 00:53:27.158573 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-01 00:53:27.158577 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:00.705) 0:00:01.555 *******
2026-04-01 00:53:27.158581 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158584 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158588 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158592 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158596 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158600 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158603 | orchestrator |
2026-04-01 00:53:27.158607 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-01 00:53:27.158611 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:00.603) 0:00:02.159 *******
2026-04-01 00:53:27.158615 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:53:27.158619 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:53:27.158623 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:53:27.158626 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.158630 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.158634 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.158638 | orchestrator |
2026-04-01 00:53:27.158641 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-01 00:53:27.158645 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:02.294) 0:00:04.454 *******
2026-04-01 00:53:27.158649 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:53:27.158653 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:53:27.158656 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:53:27.158660 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.158664 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.158668 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.158672 | orchestrator |
2026-04-01 00:53:27.158676 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-01 00:53:27.158680 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:00.860) 0:00:05.314 *******
2026-04-01 00:53:27.158684 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:53:27.158687 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.158691 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.158695 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.158699 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:53:27.158702 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:53:27.158706 | orchestrator |
2026-04-01 00:53:27.158731 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-01 00:53:27.158736 | orchestrator | Wednesday 01 April 2026 00:48:59 +0000 (0:00:02.549) 0:00:07.863 *******
2026-04-01 00:53:27.158740 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158744 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158747 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158751 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158755 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158763 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158767 | orchestrator |
2026-04-01 00:53:27.158780 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-01 00:53:27.158785 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:01.031) 0:00:08.895 *******
2026-04-01 00:53:27.158788 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158808 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158812 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158816 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158819 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158823 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158827 | orchestrator |
2026-04-01 00:53:27.158831 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-01 00:53:27.158835 | orchestrator | Wednesday 01 April 2026 00:49:01 +0000 (0:00:00.680) 0:00:09.575 *******
2026-04-01 00:53:27.158839 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158843 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158847 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158851 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158854 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158858 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158862 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158866 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158870 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158873 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158886 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158890 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158893 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158897 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158901 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158905 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-01 00:53:27.158909 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-01 00:53:27.158912 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158916 | orchestrator |
2026-04-01 00:53:27.158920 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-01 00:53:27.158924 | orchestrator | Wednesday 01 April 2026 00:49:02 +0000 (0:00:01.174) 0:00:10.519 *******
2026-04-01 00:53:27.158928 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.158931 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.158935 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.158939 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.158943 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.158947 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.158950 | orchestrator |
2026-04-01 00:53:27.158954 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-01 00:53:27.158959 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:01.037) 0:00:11.693 *******
2026-04-01 00:53:27.158963 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:53:27.158967 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:53:27.158971 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:53:27.158974 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.158978 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.158982 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.158986 | orchestrator |
2026-04-01 00:53:27.158994 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-01 00:53:27.158998 | orchestrator | Wednesday 01 April 2026 00:49:04 +0000 (0:00:01.037) 0:00:12.731 *******
2026-04-01 00:53:27.159002 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:53:27.159005 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.159009 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159013 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.159017 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:53:27.159022 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:53:27.159027 | orchestrator |
2026-04-01 00:53:27.159031 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-01 00:53:27.159035 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:06.660) 0:00:19.391 *******
2026-04-01 00:53:27.159040 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159044 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159048 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159053 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159057 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159061 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159066 | orchestrator |
2026-04-01 00:53:27.159070 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-01 00:53:27.159075 | orchestrator | Wednesday 01 April 2026 00:49:12 +0000 (0:00:01.877) 0:00:21.268 *******
2026-04-01 00:53:27.159080 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159084 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159088 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159093 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159097 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159102 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159106 | orchestrator |
2026-04-01 00:53:27.159111 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-01 00:53:27.159117 | orchestrator | Wednesday 01 April 2026 00:49:15 +0000 (0:00:02.346) 0:00:23.614 *******
2026-04-01 00:53:27.159121 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159126 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159130 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159135 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159139 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159147 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159151 | orchestrator |
2026-04-01 00:53:27.159155 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-01 00:53:27.159159 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:01.415) 0:00:25.029 *******
2026-04-01 00:53:27.159163 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-01 00:53:27.159167 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-01 00:53:27.159171 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159175 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-01 00:53:27.159179 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-01 00:53:27.159183 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159186 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-01 00:53:27.159190 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-01 00:53:27.159194 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159198 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-01 00:53:27.159202 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-01 00:53:27.159205 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159209 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-01 00:53:27.159213 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-01 00:53:27.159217 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159221 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-01 00:53:27.159228 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-01 00:53:27.159232 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159236 | orchestrator |
2026-04-01 00:53:27.159240 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-01 00:53:27.159248 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:00.960) 0:00:25.990 *******
2026-04-01 00:53:27.159254 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159260 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159266 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159271 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159281 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159289 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159295 | orchestrator |
2026-04-01 00:53:27.159300 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-01 00:53:27.159307 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:00.732) 0:00:26.722 *******
2026-04-01 00:53:27.159312 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.159318 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.159324 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.159329 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159335 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159340 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159346 | orchestrator |
2026-04-01 00:53:27.159351 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-01 00:53:27.159357 | orchestrator |
2026-04-01 00:53:27.159363 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-01 00:53:27.159369 | orchestrator | Wednesday 01 April 2026 00:49:19 +0000 (0:00:01.109) 0:00:27.832 *******
2026-04-01 00:53:27.159376 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159382 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159387 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159393 | orchestrator |
2026-04-01 00:53:27.159399 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-01 00:53:27.159405 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:00.699) 0:00:28.532 *******
2026-04-01 00:53:27.159411 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159417 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159423 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159429 | orchestrator |
2026-04-01 00:53:27.159435 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-01 00:53:27.159440 | orchestrator | Wednesday 01 April 2026 00:49:21 +0000 (0:00:01.267) 0:00:29.799 *******
2026-04-01 00:53:27.159446 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159452 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159457 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159463 | orchestrator |
2026-04-01 00:53:27.159470 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-01 00:53:27.159476 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:01.055) 0:00:30.854 *******
2026-04-01 00:53:27.159482 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159490 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159499 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159505 | orchestrator |
2026-04-01 00:53:27.159511 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-01 00:53:27.159517 | orchestrator | Wednesday 01 April 2026 00:49:23 +0000 (0:00:01.444) 0:00:32.299 *******
2026-04-01 00:53:27.159523 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159529 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159535 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159541 | orchestrator |
2026-04-01 00:53:27.159547 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-01 00:53:27.159553 | orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:00.368) 0:00:32.667 *******
2026-04-01 00:53:27.159567 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.159573 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159580 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.159586 | orchestrator |
2026-04-01 00:53:27.159592 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-01 00:53:27.159599 | orchestrator | Wednesday 01 April 2026 00:49:25 +0000 (0:00:01.012) 0:00:33.680 *******
2026-04-01 00:53:27.159605 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159609 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.159613 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.159617 | orchestrator |
2026-04-01 00:53:27.159621 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-01 00:53:27.159625 | orchestrator | Wednesday 01 April 2026 00:49:26 +0000 (0:00:01.602) 0:00:35.283 *******
2026-04-01 00:53:27.159633 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:53:27.159637 | orchestrator |
2026-04-01 00:53:27.159641 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-01 00:53:27.159645 | orchestrator | Wednesday 01 April 2026 00:49:27 +0000 (0:00:00.873) 0:00:36.157 *******
2026-04-01 00:53:27.159649 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159653 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159656 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159660 | orchestrator |
2026-04-01 00:53:27.159664 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-01 00:53:27.159668 | orchestrator | Wednesday 01 April 2026 00:49:30 +0000 (0:00:02.867) 0:00:39.025 *******
2026-04-01 00:53:27.159672 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159676 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159680 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159683 | orchestrator |
2026-04-01 00:53:27.159687 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-01 00:53:27.159691 | orchestrator | Wednesday 01 April 2026 00:49:31 +0000 (0:00:00.985) 0:00:40.010 *******
2026-04-01 00:53:27.159695 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159699 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159703 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159707 | orchestrator |
2026-04-01 00:53:27.159711 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-01 00:53:27.159717 | orchestrator | Wednesday 01 April 2026 00:49:32 +0000 (0:00:01.219) 0:00:41.229 *******
2026-04-01 00:53:27.159723 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159731 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159738 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159748 | orchestrator |
2026-04-01 00:53:27.159753 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-01 00:53:27.159764 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:01.638) 0:00:42.868 *******
2026-04-01 00:53:27.159770 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159775 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159782 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159789 | orchestrator |
2026-04-01 00:53:27.159858 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-01 00:53:27.159872 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:00.412) 0:00:43.280 *******
2026-04-01 00:53:27.159878 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.159884 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.159890 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.159896 | orchestrator |
2026-04-01 00:53:27.159903 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-01 00:53:27.159907 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:00.559) 0:00:43.840 *******
2026-04-01 00:53:27.159911 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:53:27.159915 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:53:27.159924 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:53:27.159928 | orchestrator |
2026-04-01 00:53:27.159931 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-01 00:53:27.159935 | orchestrator | Wednesday 01 April 2026 00:49:37 +0000 (0:00:01.947) 0:00:45.787 *******
2026-04-01 00:53:27.159939 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159943 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159947 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159951 | orchestrator |
2026-04-01 00:53:27.159955 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-01 00:53:27.159958 | orchestrator | Wednesday 01 April 2026 00:49:39 +0000 (0:00:02.569) 0:00:48.357 *******
2026-04-01 00:53:27.159962 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.159966 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.159970 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.159974 | orchestrator |
2026-04-01 00:53:27.159978 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-01 00:53:27.159982 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:00.383) 0:00:48.740 *******
2026-04-01 00:53:27.159986 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-01 00:53:27.159991 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-01 00:53:27.159995 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-01 00:53:27.159999 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-01 00:53:27.160003 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-01 00:53:27.160007 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-01 00:53:27.160010 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-01 00:53:27.160014 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-01 00:53:27.160018 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-01 00:53:27.160025 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-01 00:53:27.160029 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-01 00:53:27.160033 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-01 00:53:27.160037 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-01 00:53:27.160041 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-01 00:53:27.160045 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-01 00:53:27.160049 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160053 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160057 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160060 | orchestrator | 2026-04-01 00:53:27.160064 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-01 00:53:27.160072 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:53.743) 0:01:42.484 ******* 2026-04-01 00:53:27.160076 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:53:27.160080 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:53:27.160084 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:53:27.160087 | orchestrator | 2026-04-01 00:53:27.160091 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-01 00:53:27.160100 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:00.439) 0:01:42.923 ******* 2026-04-01 00:53:27.160104 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160108 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160112 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160116 | orchestrator | 2026-04-01 00:53:27.160119 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-01 00:53:27.160123 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:01.150) 0:01:44.074 ******* 2026-04-01 00:53:27.160127 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160131 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160135 | 
orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160139 | orchestrator | 2026-04-01 00:53:27.160143 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-01 00:53:27.160147 | orchestrator | Wednesday 01 April 2026 00:50:36 +0000 (0:00:01.320) 0:01:45.395 ******* 2026-04-01 00:53:27.160150 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160154 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160158 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160162 | orchestrator | 2026-04-01 00:53:27.160166 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-01 00:53:27.160170 | orchestrator | Wednesday 01 April 2026 00:51:02 +0000 (0:00:25.558) 0:02:10.953 ******* 2026-04-01 00:53:27.160174 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160178 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160182 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160185 | orchestrator | 2026-04-01 00:53:27.160189 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-01 00:53:27.160193 | orchestrator | Wednesday 01 April 2026 00:51:03 +0000 (0:00:00.679) 0:02:11.633 ******* 2026-04-01 00:53:27.160197 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160201 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160205 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160209 | orchestrator | 2026-04-01 00:53:27.160213 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-01 00:53:27.160217 | orchestrator | Wednesday 01 April 2026 00:51:04 +0000 (0:00:00.903) 0:02:12.536 ******* 2026-04-01 00:53:27.160221 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160224 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160228 | orchestrator | changed: [testbed-node-2] 
2026-04-01 00:53:27.160232 | orchestrator | 2026-04-01 00:53:27.160236 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-01 00:53:27.160240 | orchestrator | Wednesday 01 April 2026 00:51:04 +0000 (0:00:00.692) 0:02:13.229 ******* 2026-04-01 00:53:27.160244 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160248 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160252 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160255 | orchestrator | 2026-04-01 00:53:27.160259 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-01 00:53:27.160263 | orchestrator | Wednesday 01 April 2026 00:51:05 +0000 (0:00:00.678) 0:02:13.907 ******* 2026-04-01 00:53:27.160267 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160271 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160275 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160279 | orchestrator | 2026-04-01 00:53:27.160282 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-01 00:53:27.160286 | orchestrator | Wednesday 01 April 2026 00:51:05 +0000 (0:00:00.304) 0:02:14.212 ******* 2026-04-01 00:53:27.160298 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160302 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160306 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160310 | orchestrator | 2026-04-01 00:53:27.160314 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-01 00:53:27.160318 | orchestrator | Wednesday 01 April 2026 00:51:06 +0000 (0:00:00.628) 0:02:14.841 ******* 2026-04-01 00:53:27.160322 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160326 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160329 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160333 | orchestrator | 
2026-04-01 00:53:27.160337 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-01 00:53:27.160341 | orchestrator | Wednesday 01 April 2026 00:51:07 +0000 (0:00:00.769) 0:02:15.611 ******* 2026-04-01 00:53:27.160345 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160349 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160353 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160356 | orchestrator | 2026-04-01 00:53:27.160363 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-01 00:53:27.160367 | orchestrator | Wednesday 01 April 2026 00:51:08 +0000 (0:00:00.877) 0:02:16.488 ******* 2026-04-01 00:53:27.160371 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:53:27.160375 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:53:27.160379 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:53:27.160383 | orchestrator | 2026-04-01 00:53:27.160386 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-01 00:53:27.160390 | orchestrator | Wednesday 01 April 2026 00:51:08 +0000 (0:00:00.867) 0:02:17.356 ******* 2026-04-01 00:53:27.160394 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:53:27.160398 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:53:27.160402 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:53:27.160406 | orchestrator | 2026-04-01 00:53:27.160410 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-01 00:53:27.160414 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.287) 0:02:17.643 ******* 2026-04-01 00:53:27.160418 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:53:27.160421 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:53:27.160425 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:53:27.160429 | orchestrator | 
2026-04-01 00:53:27.160433 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-01 00:53:27.160437 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.462) 0:02:18.106 ******* 2026-04-01 00:53:27.160441 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160445 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160448 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160452 | orchestrator | 2026-04-01 00:53:27.160456 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-01 00:53:27.160460 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:00.714) 0:02:18.821 ******* 2026-04-01 00:53:27.160464 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.160471 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.160474 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.160478 | orchestrator | 2026-04-01 00:53:27.160482 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-01 00:53:27.160486 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:00.616) 0:02:19.437 ******* 2026-04-01 00:53:27.160490 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:53:27.160494 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:53:27.160498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-01 00:53:27.160502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:53:27.160509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:53:27.160513 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-01 00:53:27.160517 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:53:27.160521 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:53:27.160525 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-01 00:53:27.160529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-01 00:53:27.160533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:53:27.160537 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:53:27.160541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-01 00:53:27.160544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:53:27.160548 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-01 00:53:27.160552 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:53:27.160556 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:53:27.160560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-01 00:53:27.160564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:53:27.160568 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-01 00:53:27.160572 | orchestrator | 2026-04-01 00:53:27.160575 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-01 00:53:27.160579 | orchestrator | 2026-04-01 00:53:27.160583 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-01 00:53:27.160587 | orchestrator | Wednesday 01 April 2026 00:51:13 +0000 (0:00:02.925) 0:02:22.363 ******* 2026-04-01 00:53:27.160591 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:53:27.160595 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:53:27.160599 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:53:27.160603 | orchestrator | 2026-04-01 00:53:27.160607 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-01 00:53:27.160611 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.282) 0:02:22.646 ******* 2026-04-01 00:53:27.160614 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:53:27.160618 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:53:27.160625 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:53:27.160629 | orchestrator | 2026-04-01 00:53:27.160632 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-01 00:53:27.160636 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.663) 0:02:23.309 ******* 2026-04-01 00:53:27.160640 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:53:27.160644 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:53:27.160648 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:53:27.160652 | orchestrator | 2026-04-01 00:53:27.160656 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-01 00:53:27.160659 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.295) 0:02:23.604 ******* 2026-04-01 00:53:27.160663 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:53:27.160667 | 
orchestrator | 2026-04-01 00:53:27.160671 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-01 00:53:27.160678 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.522) 0:02:24.127 ******* 2026-04-01 00:53:27.160682 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:53:27.160686 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:53:27.160690 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:53:27.160694 | orchestrator | 2026-04-01 00:53:27.160698 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-01 00:53:27.160702 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.292) 0:02:24.419 ******* 2026-04-01 00:53:27.160706 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:53:27.160710 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:53:27.160713 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:53:27.160717 | orchestrator | 2026-04-01 00:53:27.160721 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-01 00:53:27.160728 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.264) 0:02:24.684 ******* 2026-04-01 00:53:27.160732 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:53:27.160736 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:53:27.160739 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:53:27.160743 | orchestrator | 2026-04-01 00:53:27.160747 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-01 00:53:27.160751 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.374) 0:02:25.059 ******* 2026-04-01 00:53:27.160755 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:53:27.160759 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:53:27.160763 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:53:27.160766 | 
orchestrator | 2026-04-01 00:53:27.160770 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-01 00:53:27.160774 | orchestrator | Wednesday 01 April 2026 00:51:17 +0000 (0:00:00.690) 0:02:25.750 ******* 2026-04-01 00:53:27.160778 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:53:27.160782 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:53:27.160786 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:53:27.160804 | orchestrator | 2026-04-01 00:53:27.160808 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-01 00:53:27.160812 | orchestrator | Wednesday 01 April 2026 00:51:18 +0000 (0:00:01.326) 0:02:27.076 ******* 2026-04-01 00:53:27.160816 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:53:27.160820 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:53:27.160824 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:53:27.160828 | orchestrator | 2026-04-01 00:53:27.160832 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-01 00:53:27.160836 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:01.278) 0:02:28.355 ******* 2026-04-01 00:53:27.160840 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:53:27.160844 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:53:27.160847 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:53:27.160851 | orchestrator | 2026-04-01 00:53:27.160855 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-01 00:53:27.160859 | orchestrator | 2026-04-01 00:53:27.160863 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-01 00:53:27.160867 | orchestrator | Wednesday 01 April 2026 00:51:30 +0000 (0:00:10.908) 0:02:39.263 ******* 2026-04-01 00:53:27.160871 | orchestrator | ok: [testbed-manager] 2026-04-01 
00:53:27.160875 | orchestrator | 2026-04-01 00:53:27.160879 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-01 00:53:27.160883 | orchestrator | Wednesday 01 April 2026 00:51:31 +0000 (0:00:00.822) 0:02:40.086 ******* 2026-04-01 00:53:27.160886 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.160890 | orchestrator | 2026-04-01 00:53:27.160894 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-01 00:53:27.160898 | orchestrator | Wednesday 01 April 2026 00:51:32 +0000 (0:00:00.415) 0:02:40.501 ******* 2026-04-01 00:53:27.160902 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-01 00:53:27.160909 | orchestrator | 2026-04-01 00:53:27.160913 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-01 00:53:27.160917 | orchestrator | Wednesday 01 April 2026 00:51:32 +0000 (0:00:00.560) 0:02:41.061 ******* 2026-04-01 00:53:27.160921 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.160925 | orchestrator | 2026-04-01 00:53:27.160929 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-01 00:53:27.160932 | orchestrator | Wednesday 01 April 2026 00:51:33 +0000 (0:00:00.841) 0:02:41.902 ******* 2026-04-01 00:53:27.160936 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.160940 | orchestrator | 2026-04-01 00:53:27.160944 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-01 00:53:27.160948 | orchestrator | Wednesday 01 April 2026 00:51:34 +0000 (0:00:00.595) 0:02:42.498 ******* 2026-04-01 00:53:27.160952 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:53:27.160956 | orchestrator | 2026-04-01 00:53:27.160960 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-01 
00:53:27.160964 | orchestrator | Wednesday 01 April 2026 00:51:36 +0000 (0:00:02.277) 0:02:44.775 ******* 2026-04-01 00:53:27.160968 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:53:27.160972 | orchestrator | 2026-04-01 00:53:27.160978 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-01 00:53:27.160982 | orchestrator | Wednesday 01 April 2026 00:51:37 +0000 (0:00:00.906) 0:02:45.682 ******* 2026-04-01 00:53:27.160985 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.160989 | orchestrator | 2026-04-01 00:53:27.160993 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-01 00:53:27.160997 | orchestrator | Wednesday 01 April 2026 00:51:37 +0000 (0:00:00.424) 0:02:46.106 ******* 2026-04-01 00:53:27.161001 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.161005 | orchestrator | 2026-04-01 00:53:27.161009 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-01 00:53:27.161012 | orchestrator | 2026-04-01 00:53:27.161016 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-01 00:53:27.161020 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.474) 0:02:46.581 ******* 2026-04-01 00:53:27.161024 | orchestrator | ok: [testbed-manager] 2026-04-01 00:53:27.161028 | orchestrator | 2026-04-01 00:53:27.161032 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-01 00:53:27.161036 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.164) 0:02:46.746 ******* 2026-04-01 00:53:27.161039 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-01 00:53:27.161043 | orchestrator | 2026-04-01 00:53:27.161047 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-04-01 00:53:27.161051 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.229) 0:02:46.975 ******* 2026-04-01 00:53:27.161055 | orchestrator | ok: [testbed-manager] 2026-04-01 00:53:27.161059 | orchestrator | 2026-04-01 00:53:27.161063 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-01 00:53:27.161067 | orchestrator | Wednesday 01 April 2026 00:51:39 +0000 (0:00:00.773) 0:02:47.749 ******* 2026-04-01 00:53:27.161073 | orchestrator | ok: [testbed-manager] 2026-04-01 00:53:27.161077 | orchestrator | 2026-04-01 00:53:27.161081 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-01 00:53:27.161085 | orchestrator | Wednesday 01 April 2026 00:51:40 +0000 (0:00:01.266) 0:02:49.016 ******* 2026-04-01 00:53:27.161089 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.161092 | orchestrator | 2026-04-01 00:53:27.161096 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-01 00:53:27.161100 | orchestrator | Wednesday 01 April 2026 00:51:41 +0000 (0:00:00.844) 0:02:49.861 ******* 2026-04-01 00:53:27.161104 | orchestrator | ok: [testbed-manager] 2026-04-01 00:53:27.161108 | orchestrator | 2026-04-01 00:53:27.161115 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-01 00:53:27.161119 | orchestrator | Wednesday 01 April 2026 00:51:41 +0000 (0:00:00.404) 0:02:50.265 ******* 2026-04-01 00:53:27.161123 | orchestrator | changed: [testbed-manager] 2026-04-01 00:53:27.161126 | orchestrator | 2026-04-01 00:53:27.161130 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-01 00:53:27.161134 | orchestrator | Wednesday 01 April 2026 00:51:48 +0000 (0:00:06.191) 0:02:56.457 ******* 2026-04-01 00:53:27.161138 | orchestrator | changed: [testbed-manager] 2026-04-01 
00:53:27.161142 | orchestrator | 2026-04-01 00:53:27.161146 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-01 00:53:27.161150 | orchestrator | Wednesday 01 April 2026 00:52:01 +0000 (0:00:13.045) 0:03:09.503 ******* 2026-04-01 00:53:27.161154 | orchestrator | ok: [testbed-manager] 2026-04-01 00:53:27.161158 | orchestrator | 2026-04-01 00:53:27.161162 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-01 00:53:27.161165 | orchestrator | 2026-04-01 00:53:27.161169 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-01 00:53:27.161173 | orchestrator | Wednesday 01 April 2026 00:52:01 +0000 (0:00:00.492) 0:03:09.995 ******* 2026-04-01 00:53:27.161177 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:53:27.161181 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:53:27.161185 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:53:27.161189 | orchestrator | 2026-04-01 00:53:27.161193 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-01 00:53:27.161197 | orchestrator | Wednesday 01 April 2026 00:52:01 +0000 (0:00:00.298) 0:03:10.294 ******* 2026-04-01 00:53:27.161201 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:53:27.161205 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:53:27.161208 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:53:27.161212 | orchestrator | 2026-04-01 00:53:27.161216 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-01 00:53:27.161220 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:00.548) 0:03:10.842 ******* 2026-04-01 00:53:27.161224 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:53:27.161228 | orchestrator | 
2026-04-01 00:53:27.161232 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-01 00:53:27.161236 | orchestrator | Wednesday 01 April 2026 00:52:03 +0000 (0:00:00.661) 0:03:11.503 *******
2026-04-01 00:53:27.161239 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161243 | orchestrator |
2026-04-01 00:53:27.161247 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-01 00:53:27.161251 | orchestrator | Wednesday 01 April 2026 00:52:04 +0000 (0:00:00.996) 0:03:12.499 *******
2026-04-01 00:53:27.161255 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161259 | orchestrator |
2026-04-01 00:53:27.161263 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-01 00:53:27.161267 | orchestrator | Wednesday 01 April 2026 00:52:05 +0000 (0:00:00.975) 0:03:13.475 *******
2026-04-01 00:53:27.161270 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161274 | orchestrator |
2026-04-01 00:53:27.161278 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-01 00:53:27.161282 | orchestrator | Wednesday 01 April 2026 00:52:05 +0000 (0:00:00.104) 0:03:13.580 *******
2026-04-01 00:53:27.161286 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161290 | orchestrator |
2026-04-01 00:53:27.161296 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-01 00:53:27.161300 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:01.001) 0:03:14.581 *******
2026-04-01 00:53:27.161304 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161308 | orchestrator |
2026-04-01 00:53:27.161312 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-01 00:53:27.161319 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:00.111) 0:03:14.693 *******
2026-04-01 00:53:27.161323 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161327 | orchestrator |
2026-04-01 00:53:27.161331 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-01 00:53:27.161334 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:00.109) 0:03:14.802 *******
2026-04-01 00:53:27.161338 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161342 | orchestrator |
2026-04-01 00:53:27.161346 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-01 00:53:27.161350 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:00.218) 0:03:15.021 *******
2026-04-01 00:53:27.161354 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161358 | orchestrator |
2026-04-01 00:53:27.161361 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-01 00:53:27.161365 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:00.106) 0:03:15.127 *******
2026-04-01 00:53:27.161369 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161373 | orchestrator |
2026-04-01 00:53:27.161377 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-01 00:53:27.161381 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:04.676) 0:03:19.803 *******
2026-04-01 00:53:27.161385 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-01 00:53:27.161391 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-01 00:53:27.161395 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-01 00:53:27.161399 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-01 00:53:27.161403 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-01 00:53:27.161407 | orchestrator |
2026-04-01 00:53:27.161411 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-01 00:53:27.161415 | orchestrator | Wednesday 01 April 2026 00:52:58 +0000 (0:00:46.843) 0:04:06.646 *******
2026-04-01 00:53:27.161419 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161422 | orchestrator |
2026-04-01 00:53:27.161426 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-01 00:53:27.161430 | orchestrator | Wednesday 01 April 2026 00:52:59 +0000 (0:00:01.178) 0:04:07.825 *******
2026-04-01 00:53:27.161434 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161438 | orchestrator |
2026-04-01 00:53:27.161442 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-01 00:53:27.161446 | orchestrator | Wednesday 01 April 2026 00:53:01 +0000 (0:00:01.642) 0:04:09.468 *******
2026-04-01 00:53:27.161450 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-01 00:53:27.161453 | orchestrator |
2026-04-01 00:53:27.161457 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-01 00:53:27.161461 | orchestrator | Wednesday 01 April 2026 00:53:02 +0000 (0:00:01.108) 0:04:10.576 *******
2026-04-01 00:53:27.161465 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161469 | orchestrator |
2026-04-01 00:53:27.161473 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-01 00:53:27.161477 | orchestrator | Wednesday 01 April 2026 00:53:02 +0000 (0:00:00.130) 0:04:10.707 *******
2026-04-01 00:53:27.161481 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-01 00:53:27.161485 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-01 00:53:27.161488 | orchestrator |
2026-04-01 00:53:27.161492 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-01 00:53:27.161496 | orchestrator | Wednesday 01 April 2026 00:53:04 +0000 (0:00:01.988) 0:04:12.696 *******
2026-04-01 00:53:27.161500 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161507 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.161511 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.161514 | orchestrator |
2026-04-01 00:53:27.161518 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-01 00:53:27.161522 | orchestrator | Wednesday 01 April 2026 00:53:04 +0000 (0:00:00.301) 0:04:12.998 *******
2026-04-01 00:53:27.161526 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.161530 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.161534 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.161538 | orchestrator |
2026-04-01 00:53:27.161542 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-01 00:53:27.161546 | orchestrator |
2026-04-01 00:53:27.161549 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-01 00:53:27.161554 | orchestrator | Wednesday 01 April 2026 00:53:05 +0000 (0:00:00.148) 0:04:14.282 *******
2026-04-01 00:53:27.161558 | orchestrator | ok: [testbed-manager]
2026-04-01 00:53:27.161561 | orchestrator |
2026-04-01 00:53:27.161565 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-01 00:53:27.161569 | orchestrator | Wednesday 01 April 2026 00:53:05 +0000 (0:00:00.231) 0:04:14.431 *******
2026-04-01 00:53:27.161573 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-01 00:53:27.161577 | orchestrator |
2026-04-01 00:53:27.161581 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-01 00:53:27.161585 | orchestrator | Wednesday 01 April 2026 00:53:06 +0000 (0:00:00.231) 0:04:14.662 *******
2026-04-01 00:53:27.161589 | orchestrator | changed: [testbed-manager]
2026-04-01 00:53:27.161592 | orchestrator |
2026-04-01 00:53:27.161596 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-01 00:53:27.161600 | orchestrator |
2026-04-01 00:53:27.161606 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-01 00:53:27.161610 | orchestrator | Wednesday 01 April 2026 00:53:11 +0000 (0:00:04.881) 0:04:19.544 *******
2026-04-01 00:53:27.161614 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:53:27.161618 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:53:27.161622 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:53:27.161626 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:53:27.161630 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:53:27.161634 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:53:27.161637 | orchestrator |
2026-04-01 00:53:27.161641 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-01 00:53:27.161645 | orchestrator | Wednesday 01 April 2026 00:53:11 +0000 (0:00:00.652) 0:04:20.196 *******
2026-04-01 00:53:27.161649 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-01 00:53:27.161653 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-01 00:53:27.161657 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-01 00:53:27.161661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-01 00:53:27.161665 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-01 00:53:27.161668 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-01 00:53:27.161672 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-01 00:53:27.161676 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-01 00:53:27.161683 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-01 00:53:27.161687 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-01 00:53:27.161691 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-01 00:53:27.161695 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-01 00:53:27.161701 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-01 00:53:27.161705 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-01 00:53:27.161709 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-01 00:53:27.161713 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-01 00:53:27.161717 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-01 00:53:27.161720 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-01 00:53:27.161724 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-01 00:53:27.161728 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-01 00:53:27.161732 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-01 00:53:27.161736 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-01 00:53:27.161740 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-01 00:53:27.161744 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-01 00:53:27.161747 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-01 00:53:27.161751 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-01 00:53:27.161755 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-01 00:53:27.161759 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-01 00:53:27.161763 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-01 00:53:27.161767 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-01 00:53:27.161771 | orchestrator |
2026-04-01 00:53:27.161774 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-01 00:53:27.161778 | orchestrator | Wednesday 01 April 2026 00:53:23 +0000 (0:00:11.399) 0:04:31.596 *******
2026-04-01 00:53:27.161782 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.161786 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.161821 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.161825 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161829 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.161833 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.161837 | orchestrator |
2026-04-01 00:53:27.161841 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-01 00:53:27.161845 | orchestrator | Wednesday 01 April 2026 00:53:23 +0000 (0:00:00.587) 0:04:32.183 *******
2026-04-01 00:53:27.161849 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:53:27.161853 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:53:27.161856 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:53:27.161860 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:53:27.161864 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:53:27.161868 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:53:27.161872 | orchestrator |
2026-04-01 00:53:27.161878 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:53:27.161883 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:53:27.161888 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-01 00:53:27.161892 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-01 00:53:27.161900 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-01 00:53:27.161904 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-01 00:53:27.161908 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-01 00:53:27.161912 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-01 00:53:27.161915 | orchestrator |
2026-04-01 00:53:27.161919 | orchestrator |
2026-04-01 00:53:27.161923 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:53:27.161930 | orchestrator | Wednesday 01 April 2026 00:53:24 +0000 (0:00:00.389) 0:04:32.573 *******
2026-04-01 00:53:27.161934 | orchestrator | ===============================================================================
2026-04-01 00:53:27.161938 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.74s
2026-04-01 00:53:27.161942 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 46.84s
2026-04-01 00:53:27.161946 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.56s
2026-04-01 00:53:27.161950 | orchestrator | kubectl : Install required packages ------------------------------------ 13.05s
2026-04-01 00:53:27.161954 | orchestrator | Manage labels ---------------------------------------------------------- 11.40s
2026-04-01 00:53:27.161958 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.91s
2026-04-01 00:53:27.161961 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.66s
2026-04-01 00:53:27.161965 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.19s
2026-04-01 00:53:27.161969 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.88s
2026-04-01 00:53:27.161973 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.68s
2026-04-01 00:53:27.161977 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.93s
2026-04-01 00:53:27.161981 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.87s
2026-04-01 00:53:27.161985 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.57s
2026-04-01 00:53:27.161989 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.55s
2026-04-01 00:53:27.161992 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.35s
2026-04-01 00:53:27.161996 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.30s
2026-04-01 00:53:27.162000 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.28s
2026-04-01 00:53:27.162004 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.99s
2026-04-01 00:53:27.162008 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.95s
2026-04-01 00:53:27.162054 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.88s
2026-04-01 00:53:27.162059 | orchestrator | 2026-04-01 00:53:27 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:30.201077 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:30.203627 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:30.207401 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task b24c68d2-c6de-401f-a74a-9cfbd383f527 is in state STARTED
2026-04-01 00:53:30.208129 | orchestrator | 2026-04-01 00:53:30 | INFO  | Task a26ff3a5-fe7e-4dfc-ab8b-f11d8b67bea2 is in state STARTED
2026-04-01 00:53:30.208147 | orchestrator | 2026-04-01 00:53:30 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:33.252174 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:33.252481 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:33.252945 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task b24c68d2-c6de-401f-a74a-9cfbd383f527 is in state SUCCESS
2026-04-01 00:53:33.253679 | orchestrator | 2026-04-01 00:53:33 | INFO  | Task a26ff3a5-fe7e-4dfc-ab8b-f11d8b67bea2 is in state STARTED
2026-04-01 00:53:33.253703 | orchestrator | 2026-04-01 00:53:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:36.296757 | orchestrator | 2026-04-01 00:53:36 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:36.296835 | orchestrator | 2026-04-01 00:53:36 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:36.297308 | orchestrator | 2026-04-01 00:53:36 | INFO  | Task a26ff3a5-fe7e-4dfc-ab8b-f11d8b67bea2 is in state SUCCESS
2026-04-01 00:53:36.297322 | orchestrator | 2026-04-01 00:53:36 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:39.335684 | orchestrator | 2026-04-01 00:53:39 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:39.337521 | orchestrator | 2026-04-01 00:53:39 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:39.337963 | orchestrator | 2026-04-01 00:53:39 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:42.379347 | orchestrator | 2026-04-01 00:53:42 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:42.381349 | orchestrator | 2026-04-01 00:53:42 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:42.381429 | orchestrator | 2026-04-01 00:53:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:45.425338 | orchestrator | 2026-04-01 00:53:45 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:45.426545 | orchestrator | 2026-04-01 00:53:45 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:45.426618 | orchestrator | 2026-04-01 00:53:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:48.469114 | orchestrator | 2026-04-01 00:53:48 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:48.470886 | orchestrator | 2026-04-01 00:53:48 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:48.470988 | orchestrator | 2026-04-01 00:53:48 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:51.504199 | orchestrator | 2026-04-01 00:53:51 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:51.506168 | orchestrator | 2026-04-01 00:53:51 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:51.506248 | orchestrator | 2026-04-01 00:53:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:54.550324 | orchestrator | 2026-04-01 00:53:54 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:54.550560 | orchestrator | 2026-04-01 00:53:54 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:54.551011 | orchestrator | 2026-04-01 00:53:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:53:57.595216 | orchestrator | 2026-04-01 00:53:57 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:53:57.596534 | orchestrator | 2026-04-01 00:53:57 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:53:57.596582 | orchestrator | 2026-04-01 00:53:57 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:00.635808 | orchestrator | 2026-04-01 00:54:00 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:00.636939 | orchestrator | 2026-04-01 00:54:00 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:00.636978 | orchestrator | 2026-04-01 00:54:00 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:03.682833 | orchestrator | 2026-04-01 00:54:03 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:03.683229 | orchestrator | 2026-04-01 00:54:03 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:03.683749 | orchestrator | 2026-04-01 00:54:03 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:06.724891 | orchestrator | 2026-04-01 00:54:06 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:06.726515 | orchestrator | 2026-04-01 00:54:06 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:06.726577 | orchestrator | 2026-04-01 00:54:06 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:09.775660 | orchestrator | 2026-04-01 00:54:09 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:09.777629 | orchestrator | 2026-04-01 00:54:09 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:09.777729 | orchestrator | 2026-04-01 00:54:09 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:12.821286 | orchestrator | 2026-04-01 00:54:12 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:12.821999 | orchestrator | 2026-04-01 00:54:12 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:12.822070 | orchestrator | 2026-04-01 00:54:12 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:15.883650 | orchestrator | 2026-04-01 00:54:15 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:15.884076 | orchestrator | 2026-04-01 00:54:15 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:15.884173 | orchestrator | 2026-04-01 00:54:15 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:18.926276 | orchestrator | 2026-04-01 00:54:18 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:18.926725 | orchestrator | 2026-04-01 00:54:18 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:18.926817 | orchestrator | 2026-04-01 00:54:18 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:21.961782 | orchestrator | 2026-04-01 00:54:21 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:21.964240 | orchestrator | 2026-04-01 00:54:21 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:21.964322 | orchestrator | 2026-04-01 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:25.022297 | orchestrator | 2026-04-01 00:54:25 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:25.022395 | orchestrator | 2026-04-01 00:54:25 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:25.022433 | orchestrator | 2026-04-01 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:28.054209 | orchestrator | 2026-04-01 00:54:28 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:28.056249 | orchestrator | 2026-04-01 00:54:28 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:28.056308 | orchestrator | 2026-04-01 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:31.108176 | orchestrator | 2026-04-01 00:54:31 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:31.109833 | orchestrator | 2026-04-01 00:54:31 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:31.109909 | orchestrator | 2026-04-01 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:34.155309 | orchestrator | 2026-04-01 00:54:34 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:34.158373 | orchestrator | 2026-04-01 00:54:34 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:34.159520 | orchestrator | 2026-04-01 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:37.199920 | orchestrator | 2026-04-01 00:54:37 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:37.201923 | orchestrator | 2026-04-01 00:54:37 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:37.201998 | orchestrator | 2026-04-01 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:40.236581 | orchestrator | 2026-04-01 00:54:40 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:40.236976 | orchestrator | 2026-04-01 00:54:40 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:40.237008 | orchestrator | 2026-04-01 00:54:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:43.275575 | orchestrator | 2026-04-01 00:54:43 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:43.277737 | orchestrator | 2026-04-01 00:54:43 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:43.278213 | orchestrator | 2026-04-01 00:54:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:46.314430 | orchestrator | 2026-04-01 00:54:46 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:46.315984 | orchestrator | 2026-04-01 00:54:46 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:46.316058 | orchestrator | 2026-04-01 00:54:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:49.357764 | orchestrator | 2026-04-01 00:54:49 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:49.359415 | orchestrator | 2026-04-01 00:54:49 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:49.359496 | orchestrator | 2026-04-01 00:54:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:52.396094 | orchestrator | 2026-04-01 00:54:52 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:52.400524 | orchestrator | 2026-04-01 00:54:52 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:52.400613 | orchestrator | 2026-04-01 00:54:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:55.436341 | orchestrator | 2026-04-01 00:54:55 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:55.437649 | orchestrator | 2026-04-01 00:54:55 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:55.437683 | orchestrator | 2026-04-01 00:54:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:54:58.483814 | orchestrator | 2026-04-01 00:54:58 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:54:58.485613 | orchestrator | 2026-04-01 00:54:58 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:54:58.485656 | orchestrator | 2026-04-01 00:54:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:01.525964 | orchestrator | 2026-04-01 00:55:01 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:01.527434 | orchestrator | 2026-04-01 00:55:01 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:01.528102 | orchestrator | 2026-04-01 00:55:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:04.575371 | orchestrator | 2026-04-01 00:55:04 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:04.576799 | orchestrator | 2026-04-01 00:55:04 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:04.576854 | orchestrator | 2026-04-01 00:55:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:07.620893 | orchestrator | 2026-04-01 00:55:07 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:07.623422 | orchestrator | 2026-04-01 00:55:07 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:07.623534 | orchestrator | 2026-04-01 00:55:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:10.659190 | orchestrator | 2026-04-01 00:55:10 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:10.659588 | orchestrator | 2026-04-01 00:55:10 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:10.659610 | orchestrator | 2026-04-01 00:55:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:13.700302 | orchestrator | 2026-04-01 00:55:13 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:13.701715 | orchestrator | 2026-04-01 00:55:13 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:13.701992 | orchestrator | 2026-04-01 00:55:13 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:16.730714 | orchestrator | 2026-04-01 00:55:16 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:16.733245 | orchestrator | 2026-04-01 00:55:16 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:16.733283 | orchestrator | 2026-04-01 00:55:16 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:19.762168 | orchestrator | 2026-04-01 00:55:19 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:19.762229 | orchestrator | 2026-04-01 00:55:19 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:19.762237 | orchestrator | 2026-04-01 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:22.791607 | orchestrator | 2026-04-01 00:55:22 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:22.792589 | orchestrator | 2026-04-01 00:55:22 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:22.792628 | orchestrator | 2026-04-01 00:55:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:25.838766 | orchestrator | 2026-04-01 00:55:25 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:25.840582 | orchestrator | 2026-04-01 00:55:25 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:25.840637 | orchestrator | 2026-04-01 00:55:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:28.880665 | orchestrator | 2026-04-01 00:55:28 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:28.880991 | orchestrator | 2026-04-01 00:55:28 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:28.881084 | orchestrator | 2026-04-01 00:55:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:31.910102 | orchestrator | 2026-04-01 00:55:31 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:31.911170 | orchestrator | 2026-04-01 00:55:31 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:31.911400 | orchestrator | 2026-04-01 00:55:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:34.949596 | orchestrator | 2026-04-01 00:55:34 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:34.949686 | orchestrator | 2026-04-01 00:55:34 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:34.949757 | orchestrator | 2026-04-01 00:55:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:37.981691 | orchestrator | 2026-04-01 00:55:37 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:37.982315 | orchestrator | 2026-04-01 00:55:37 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:37.982413 | orchestrator | 2026-04-01 00:55:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:41.019836 | orchestrator | 2026-04-01 00:55:41 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state STARTED
2026-04-01 00:55:41.021363 | orchestrator | 2026-04-01 00:55:41 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:55:41.022314 | orchestrator | 2026-04-01 00:55:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:55:44.061370 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED
2026-04-01 00:55:44.066657 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:55:44.075763 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task bf5536b6-4451-40e2-86f4-f991369beeab is in state SUCCESS
2026-04-01 00:55:44.077890 | orchestrator |
2026-04-01 00:55:44.077968 | orchestrator |
2026-04-01 00:55:44.077981 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-01 00:55:44.077991 | orchestrator |
2026-04-01 00:55:44.077998 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-01 00:55:44.078006 | orchestrator | Wednesday 01 April 2026 00:53:27 +0000 (0:00:00.249) 0:00:00.249 *******
2026-04-01 00:55:44.078058 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-01 00:55:44.078066 | orchestrator |
2026-04-01 00:55:44.078073 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-01 00:55:44.078080 | orchestrator | Wednesday 01 April 2026 00:53:28 +0000 (0:00:01.070) 0:00:01.319 *******
2026-04-01 00:55:44.078087 | orchestrator | changed: [testbed-manager]
2026-04-01 00:55:44.078094 | orchestrator |
2026-04-01 00:55:44.078101 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-01 00:55:44.078107 | orchestrator | Wednesday 01 April 2026 00:53:30 +0000 (0:00:01.512) 0:00:02.832 *******
2026-04-01 00:55:44.078113 | orchestrator | changed: [testbed-manager]
2026-04-01 00:55:44.078166 | orchestrator |
2026-04-01 00:55:44.078233 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:55:44.078255 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:55:44.078264 | orchestrator |
2026-04-01 00:55:44.078271 | orchestrator |
2026-04-01 00:55:44.078277 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:55:44.078284 | orchestrator | Wednesday 01 April 2026 00:53:30 +0000 (0:00:00.556) 0:00:03.389 *******
2026-04-01 00:55:44.078291 | orchestrator | ===============================================================================
2026-04-01 00:55:44.078298 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.51s
2026-04-01 00:55:44.078305 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s
2026-04-01 00:55:44.078312 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s
2026-04-01 00:55:44.078318 | orchestrator |
2026-04-01 00:55:44.078367 | orchestrator |
2026-04-01 00:55:44.078378 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-01 00:55:44.078385 | orchestrator |
2026-04-01 00:55:44.078448 | orchestrator | TASK [Get home directory of operator
user] ************************************* 2026-04-01 00:55:44.078457 | orchestrator | Wednesday 01 April 2026 00:53:27 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-04-01 00:55:44.078465 | orchestrator | ok: [testbed-manager] 2026-04-01 00:55:44.078474 | orchestrator | 2026-04-01 00:55:44.078483 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-01 00:55:44.078529 | orchestrator | Wednesday 01 April 2026 00:53:28 +0000 (0:00:00.991) 0:00:01.255 ******* 2026-04-01 00:55:44.078538 | orchestrator | ok: [testbed-manager] 2026-04-01 00:55:44.078546 | orchestrator | 2026-04-01 00:55:44.078553 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-01 00:55:44.078561 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:00.615) 0:00:01.871 ******* 2026-04-01 00:55:44.078568 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-01 00:55:44.078662 | orchestrator | 2026-04-01 00:55:44.078672 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-01 00:55:44.078679 | orchestrator | Wednesday 01 April 2026 00:53:30 +0000 (0:00:00.985) 0:00:02.857 ******* 2026-04-01 00:55:44.078686 | orchestrator | changed: [testbed-manager] 2026-04-01 00:55:44.078694 | orchestrator | 2026-04-01 00:55:44.078702 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-01 00:55:44.078710 | orchestrator | Wednesday 01 April 2026 00:53:31 +0000 (0:00:01.228) 0:00:04.085 ******* 2026-04-01 00:55:44.078718 | orchestrator | changed: [testbed-manager] 2026-04-01 00:55:44.078724 | orchestrator | 2026-04-01 00:55:44.078732 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-01 00:55:44.078739 | orchestrator | Wednesday 01 April 2026 00:53:31 +0000 (0:00:00.620) 0:00:04.706 ******* 2026-04-01 00:55:44.078748 | 
orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:55:44.078756 | orchestrator | 2026-04-01 00:55:44.078763 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-01 00:55:44.078770 | orchestrator | Wednesday 01 April 2026 00:53:33 +0000 (0:00:01.910) 0:00:06.617 ******* 2026-04-01 00:55:44.078779 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-01 00:55:44.078786 | orchestrator | 2026-04-01 00:55:44.078794 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-01 00:55:44.078800 | orchestrator | Wednesday 01 April 2026 00:53:34 +0000 (0:00:01.015) 0:00:07.632 ******* 2026-04-01 00:55:44.078807 | orchestrator | ok: [testbed-manager] 2026-04-01 00:55:44.078813 | orchestrator | 2026-04-01 00:55:44.078819 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-01 00:55:44.078826 | orchestrator | Wednesday 01 April 2026 00:53:35 +0000 (0:00:00.526) 0:00:08.159 ******* 2026-04-01 00:55:44.078833 | orchestrator | ok: [testbed-manager] 2026-04-01 00:55:44.078854 | orchestrator | 2026-04-01 00:55:44.078861 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:55:44.078868 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 00:55:44.078876 | orchestrator | 2026-04-01 00:55:44.078883 | orchestrator | 2026-04-01 00:55:44.078890 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:55:44.078896 | orchestrator | Wednesday 01 April 2026 00:53:35 +0000 (0:00:00.390) 0:00:08.549 ******* 2026-04-01 00:55:44.078903 | orchestrator | =============================================================================== 2026-04-01 00:55:44.078909 | orchestrator | Make kubeconfig available for use inside the manager service 
------------ 1.91s 2026-04-01 00:55:44.078916 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-04-01 00:55:44.078922 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.02s 2026-04-01 00:55:44.078948 | orchestrator | Get home directory of operator user ------------------------------------- 0.99s 2026-04-01 00:55:44.078957 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.99s 2026-04-01 00:55:44.078964 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s 2026-04-01 00:55:44.079002 | orchestrator | Create .kube directory -------------------------------------------------- 0.62s 2026-04-01 00:55:44.079010 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.53s 2026-04-01 00:55:44.079017 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.39s 2026-04-01 00:55:44.079052 | orchestrator | 2026-04-01 00:55:44.079058 | orchestrator | 2026-04-01 00:55:44.079065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:55:44.079072 | orchestrator | 2026-04-01 00:55:44.079079 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:55:44.079087 | orchestrator | Wednesday 01 April 2026 00:50:10 +0000 (0:00:00.494) 0:00:00.494 ******* 2026-04-01 00:55:44.079093 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.079155 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.079162 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.079169 | orchestrator | 2026-04-01 00:55:44.079176 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:55:44.079183 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:00.441) 0:00:00.935 ******* 
2026-04-01 00:55:44.079190 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-01 00:55:44.079197 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-01 00:55:44.079204 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-01 00:55:44.079211 | orchestrator |
2026-04-01 00:55:44.079218 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-01 00:55:44.079225 | orchestrator |
2026-04-01 00:55:44.079232 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-01 00:55:44.079254 | orchestrator | Wednesday 01 April 2026 00:50:11 +0000 (0:00:00.637) 0:00:01.572 *******
2026-04-01 00:55:44.079294 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:55:44.079303 | orchestrator |
2026-04-01 00:55:44.079310 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-01 00:55:44.079316 | orchestrator | Wednesday 01 April 2026 00:50:13 +0000 (0:00:01.767) 0:00:03.340 *******
2026-04-01 00:55:44.079422 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:55:44.079431 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:55:44.079437 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:55:44.079444 | orchestrator |
2026-04-01 00:55:44.079450 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-01 00:55:44.079458 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:02.522) 0:00:05.862 *******
2026-04-01 00:55:44.079476 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:55:44.079482 | orchestrator |
2026-04-01 00:55:44.079489 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-01 00:55:44.079496 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:00.627) 0:00:06.490 *******
2026-04-01 00:55:44.079501 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:55:44.079507 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:55:44.079512 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:55:44.079519 | orchestrator |
2026-04-01 00:55:44.079527 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-01 00:55:44.079534 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:01.149) 0:00:07.640 *******
2026-04-01 00:55:44.079578 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079613 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079634 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-01 00:55:44.079641 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-01 00:55:44.079646 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-01 00:55:44.079653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-01 00:55:44.079671 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-01 00:55:44.079679 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-01 00:55:44.079686 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-01 00:55:44.079692 | orchestrator |
2026-04-01 00:55:44.079699 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-01 00:55:44.079706 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:03.157) 0:00:10.798 *******
2026-04-01 00:55:44.079713 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-01 00:55:44.079720 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-01 00:55:44.079727 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-01 00:55:44.079734 | orchestrator |
2026-04-01 00:55:44.079741 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-01 00:55:44.079761 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:00.888) 0:00:11.687 *******
2026-04-01 00:55:44.079768 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-01 00:55:44.079774 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-01 00:55:44.079781 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-01 00:55:44.079788 | orchestrator |
2026-04-01 00:55:44.079794 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-01 00:55:44.079801 | orchestrator | Wednesday 01 April 2026 00:50:23 +0000 (0:00:01.292) 0:00:12.979 *******
2026-04-01 00:55:44.079807 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-01 00:55:44.079881 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:55:44.079891 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-01 00:55:44.079899 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:55:44.079905 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-01 00:55:44.079912 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:55:44.079918 | orchestrator |
2026-04-01 00:55:44.079925 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-01 00:55:44.079942 | orchestrator | Wednesday 01 April 2026 00:50:24 +0000 (0:00:01.313) 0:00:14.292 *******
2026-04-01 00:55:44.079954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.079971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.079980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.079987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.079994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080052 | orchestrator |
2026-04-01 00:55:44.080078 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-01 00:55:44.080087 | orchestrator | Wednesday 01 April 2026 00:50:27 +0000 (0:00:02.730) 0:00:17.022 *******
2026-04-01 00:55:44.080094 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:55:44.080100 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:55:44.080105 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:55:44.080111 | orchestrator |
2026-04-01 00:55:44.080117 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-01 00:55:44.080123 | orchestrator | Wednesday 01 April 2026 00:50:28 +0000 (0:00:01.266) 0:00:18.289 *******
2026-04-01 00:55:44.080129 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-01 00:55:44.080134 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-01 00:55:44.080141 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-01 00:55:44.080147 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-01 00:55:44.080154 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-01 00:55:44.080160 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-01 00:55:44.080184 | orchestrator |
2026-04-01 00:55:44.080191 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-01 00:55:44.080198 | orchestrator | Wednesday 01 April 2026 00:50:31 +0000 (0:00:03.012) 0:00:21.301 *******
2026-04-01 00:55:44.080205 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:55:44.080212 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:55:44.080219 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:55:44.080227 | orchestrator |
2026-04-01 00:55:44.080234 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-01 00:55:44.080241 | orchestrator | Wednesday 01 April 2026 00:50:32 +0000 (0:00:01.109) 0:00:22.411 *******
2026-04-01 00:55:44.080247 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:55:44.080254 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:55:44.080261 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:55:44.080305 | orchestrator |
2026-04-01 00:55:44.080313 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-01 00:55:44.080319 | orchestrator | Wednesday 01 April 2026 00:50:34 +0000 (0:00:01.837) 0:00:24.249 *******
2026-04-01 00:55:44.080342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-01 00:55:44.080612 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:55:44.080636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-01 00:55:44.080754 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:55:44.080762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-01 00:55:44.080795 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:55:44.080808 | orchestrator |
2026-04-01 00:55:44.080815 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-01 00:55:44.080823 | orchestrator | Wednesday 01 April 2026 00:50:35 +0000 (0:00:01.287) 0:00:25.536 *******
2026-04-01 00:55:44.080830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-01 00:55:44.080867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-01 00:55:44.080880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-01 00:55:44.080893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-01 00:55:44.080906
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.080914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.080921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:55:44.080932 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.080940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca', '__omit_place_holder__32a523c9891ee977e55041531215e2c0129a54ca'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-01 00:55:44.080953 | orchestrator | 2026-04-01 00:55:44.080960 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-01 00:55:44.080967 | orchestrator | Wednesday 01 April 2026 00:50:39 +0000 (0:00:03.837) 0:00:29.373 ******* 2026-04-01 00:55:44.080975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086444 | orchestrator | 2026-04-01 00:55:44.086451 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-01 00:55:44.086460 | orchestrator | Wednesday 01 April 2026 00:50:43 +0000 (0:00:03.646) 0:00:33.020 ******* 2026-04-01 00:55:44.086467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:55:44.086474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:55:44.086479 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-01 00:55:44.086486 | orchestrator | 2026-04-01 00:55:44.086493 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-01 00:55:44.086499 | orchestrator | Wednesday 01 April 2026 00:50:45 +0000 (0:00:01.847) 0:00:34.867 ******* 2026-04-01 00:55:44.086506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:55:44.086513 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:55:44.086519 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-01 00:55:44.086526 | orchestrator | 2026-04-01 00:55:44.086533 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-01 00:55:44.086539 | orchestrator | Wednesday 01 April 2026 00:50:48 +0000 (0:00:03.514) 0:00:38.382 ******* 2026-04-01 00:55:44.086547 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.086552 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.086556 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.086559 | orchestrator | 2026-04-01 00:55:44.086564 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-01 00:55:44.086576 | orchestrator | Wednesday 01 April 2026 00:50:49 +0000 (0:00:00.563) 0:00:38.945 ******* 2026-04-01 00:55:44.086583 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:55:44.086597 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:55:44.086603 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-01 00:55:44.086609 | orchestrator | 2026-04-01 00:55:44.086615 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-01 00:55:44.086621 | orchestrator | Wednesday 01 April 2026 00:50:51 +0000 (0:00:02.238) 0:00:41.183 ******* 2026-04-01 00:55:44.086627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:55:44.086632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:55:44.086638 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-01 00:55:44.086643 | orchestrator | 2026-04-01 00:55:44.086649 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-01 00:55:44.086656 | orchestrator | Wednesday 01 April 2026 00:50:53 +0000 (0:00:02.259) 0:00:43.443 ******* 2026-04-01 00:55:44.086662 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.086669 | orchestrator | 2026-04-01 00:55:44.086675 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-01 00:55:44.086681 | orchestrator | Wednesday 01 April 2026 00:50:54 +0000 (0:00:00.478) 0:00:43.921 ******* 2026-04-01 00:55:44.086688 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-01 00:55:44.086694 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-01 00:55:44.086700 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-01 00:55:44.086707 | orchestrator | 2026-04-01 00:55:44.086715 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 
2026-04-01 00:55:44.086724 | orchestrator | Wednesday 01 April 2026 00:50:56 +0000 (0:00:02.017) 0:00:45.938 ******* 2026-04-01 00:55:44.086730 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-01 00:55:44.086737 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-01 00:55:44.086742 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-01 00:55:44.086748 | orchestrator | 2026-04-01 00:55:44.086754 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-01 00:55:44.086760 | orchestrator | Wednesday 01 April 2026 00:50:57 +0000 (0:00:01.583) 0:00:47.522 ******* 2026-04-01 00:55:44.086765 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.086771 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.086778 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.086784 | orchestrator | 2026-04-01 00:55:44.086789 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-01 00:55:44.086796 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:00.269) 0:00:47.792 ******* 2026-04-01 00:55:44.086802 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.086808 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.086815 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.086821 | orchestrator | 2026-04-01 00:55:44.086835 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-01 00:55:44.086839 | orchestrator | Wednesday 01 April 2026 00:50:58 +0000 (0:00:00.253) 0:00:48.045 ******* 2026-04-01 00:55:44.086844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.086885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.086916 | orchestrator | 2026-04-01 00:55:44.086922 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-01 00:55:44.086927 | orchestrator | Wednesday 01 April 2026 00:51:01 +0000 (0:00:02.937) 0:00:50.982 ******* 2026-04-01 00:55:44.086931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.086935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.086939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.086943 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.086950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.086959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.086963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.086967 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.086974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.086978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.086982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.086986 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.086990 | orchestrator | 2026-04-01 00:55:44.086994 | orchestrator | TASK [service-cert-copy : mariadb | Copying 
over backend internal TLS key] ***** 2026-04-01 00:55:44.086998 | orchestrator | Wednesday 01 April 2026 00:51:01 +0000 (0:00:00.500) 0:00:51.482 ******* 2026-04-01 00:55:44.087002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087022 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.087029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087041 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.087045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087065 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.087069 | orchestrator | 2026-04-01 00:55:44.087073 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-01 00:55:44.087077 | orchestrator | Wednesday 01 April 2026 00:51:02 +0000 (0:00:00.711) 0:00:52.194 ******* 2026-04-01 00:55:44.087081 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:55:44.087085 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:55:44.087089 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-01 00:55:44.087093 | orchestrator | 2026-04-01 00:55:44.087097 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-01 00:55:44.087101 | orchestrator | Wednesday 01 April 2026 00:51:04 +0000 (0:00:01.736) 0:00:53.931 ******* 2026-04-01 00:55:44.087104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:55:44.087111 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:55:44.087115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-01 00:55:44.087119 | orchestrator | 2026-04-01 00:55:44.087123 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-01 00:55:44.087127 | orchestrator | Wednesday 01 April 2026 00:51:06 +0000 (0:00:01.828) 0:00:55.760 ******* 2026-04-01 00:55:44.087131 
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:55:44.087135 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:55:44.087139 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-01 00:55:44.087143 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:55:44.087146 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.087150 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:55:44.087154 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.087158 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-01 00:55:44.087162 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.087166 | orchestrator | 2026-04-01 00:55:44.087169 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-01 00:55:44.087173 | orchestrator | Wednesday 01 April 2026 00:51:06 +0000 (0:00:00.831) 0:00:56.591 ******* 2026-04-01 00:55:44.087184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
2026-04-01 00:55:44.087192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.087196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.087201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.087207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.087212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.087219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 
00:55:44.087223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.087231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.087235 | orchestrator | 2026-04-01 00:55:44.087239 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-01 00:55:44.087243 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:02.403) 0:00:58.994 ******* 2026-04-01 00:55:44.087247 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:55:44.087251 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.087255 | orchestrator | } 2026-04-01 00:55:44.087267 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:55:44.087271 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.087275 | orchestrator | } 2026-04-01 00:55:44.087279 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:55:44.087283 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.087287 | orchestrator | } 2026-04-01 
00:55:44.087290 | orchestrator | 2026-04-01 00:55:44.087294 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:55:44.087298 | orchestrator | Wednesday 01 April 2026 00:51:09 +0000 (0:00:00.320) 0:00:59.315 ******* 2026-04-01 00:55:44.087302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087321 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.087325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087347 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.087351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.087358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.087362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.087369 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.087373 | orchestrator | 2026-04-01 00:55:44.087377 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-01 00:55:44.087381 | orchestrator | Wednesday 01 April 2026 00:51:10 +0000 (0:00:01.178) 0:01:00.493 ******* 2026-04-01 00:55:44.087385 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.087406 | orchestrator | 2026-04-01 00:55:44.087413 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-01 00:55:44.087419 | orchestrator | Wednesday 01 April 2026 00:51:11 +0000 (0:00:00.726) 0:01:01.220 ******* 2026-04-01 00:55:44.087429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.087440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 
00:55:44.087457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.087465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.087488 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087504 | orchestrator | 2026-04-01 00:55:44.087507 | orchestrator | TASK [haproxy-config : Add 
configuration for aodh when using single external frontend] *** 2026-04-01 00:55:44.087511 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:03.064) 0:01:04.285 ******* 2026-04-01 00:55:44.087516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.087524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087548 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.087556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.087586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087612 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.087618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.087633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.087641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087654 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.087660 | orchestrator | 2026-04-01 00:55:44.087667 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-01 00:55:44.087673 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.678) 0:01:04.963 ******* 2026-04-01 00:55:44.087680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.087689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  
2026-04-01 00:55:44.087701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.087709 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.087716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.087731 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.087737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.087743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.087749 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.087756 | orchestrator | 2026-04-01 00:55:44.087761 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-01 00:55:44.087768 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:01.006) 0:01:05.969 ******* 2026-04-01 00:55:44.087774 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.087781 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.087786 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.087793 | orchestrator | 2026-04-01 00:55:44.087799 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] 
*************** 2026-04-01 00:55:44.087805 | orchestrator | Wednesday 01 April 2026 00:51:17 +0000 (0:00:01.316) 0:01:07.286 ******* 2026-04-01 00:55:44.087812 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.087816 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.087820 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.087824 | orchestrator | 2026-04-01 00:55:44.087831 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-01 00:55:44.087835 | orchestrator | Wednesday 01 April 2026 00:51:19 +0000 (0:00:02.200) 0:01:09.487 ******* 2026-04-01 00:55:44.087839 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.087843 | orchestrator | 2026-04-01 00:55:44.087847 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-01 00:55:44.087851 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:00.585) 0:01:10.072 ******* 2026-04-01 00:55:44.087855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.087861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.087876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087881 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.087898 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.088208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088299 | orchestrator | 2026-04-01 00:55:44.088307 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-01 00:55:44.088315 | orchestrator | Wednesday 01 April 2026 00:51:23 +0000 (0:00:03.322) 0:01:13.394 ******* 2026-04-01 00:55:44.088329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.088339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088352 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.088376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.088384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088457 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.088464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.088470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.088495 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.088502 | orchestrator | 2026-04-01 00:55:44.088509 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-01 00:55:44.088518 | orchestrator | Wednesday 01 April 2026 00:51:24 +0000 (0:00:00.895) 0:01:14.289 ******* 2026-04-01 00:55:44.088526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088546 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.088553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088566 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
00:55:44.088572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.088591 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.088597 | orchestrator | 2026-04-01 00:55:44.088604 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-01 00:55:44.088610 | orchestrator | Wednesday 01 April 2026 00:51:25 +0000 (0:00:00.869) 0:01:15.159 ******* 2026-04-01 00:55:44.088617 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.088623 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.088630 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.088637 | orchestrator | 2026-04-01 00:55:44.088643 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-01 00:55:44.088650 | orchestrator | Wednesday 01 April 2026 00:51:26 +0000 (0:00:01.257) 0:01:16.417 ******* 2026-04-01 00:55:44.088656 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.088664 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.088670 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.088676 | orchestrator | 2026-04-01 00:55:44.088682 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-01 00:55:44.088696 | orchestrator | Wednesday 01 April 2026 00:51:28 +0000 (0:00:02.088) 0:01:18.506 ******* 2026-04-01 00:55:44.088704 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 00:55:44.088711 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.088718 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.088724 | orchestrator | 2026-04-01 00:55:44.088732 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-01 00:55:44.088739 | orchestrator | Wednesday 01 April 2026 00:51:29 +0000 (0:00:00.268) 0:01:18.774 ******* 2026-04-01 00:55:44.088746 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.088755 | orchestrator | 2026-04-01 00:55:44.088763 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-01 00:55:44.088771 | orchestrator | Wednesday 01 April 2026 00:51:29 +0000 (0:00:00.861) 0:01:19.636 ******* 2026-04-01 00:55:44.088781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:55:44.088800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:55:44.088810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-01 00:55:44.088819 | orchestrator | 2026-04-01 00:55:44.088832 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-01 00:55:44.088841 | orchestrator | Wednesday 01 April 2026 00:51:32 +0000 (0:00:03.041) 0:01:22.678 ******* 2026-04-01 00:55:44.088849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:55:44.088863 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.088873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:55:44.088882 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.088895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-01 00:55:44.088904 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.088912 | orchestrator | 2026-04-01 00:55:44.088919 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-01 00:55:44.088926 | orchestrator | Wednesday 01 April 2026 00:51:34 +0000 (0:00:01.743) 0:01:24.421 ******* 2026-04-01 00:55:44.088934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.088943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.088951 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.088963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.088978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.088986 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.088993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.089001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-01 00:55:44.089008 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089015 | orchestrator | 2026-04-01 00:55:44.089022 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-01 00:55:44.089028 | orchestrator | Wednesday 01 April 2026 00:51:36 +0000 (0:00:02.075) 0:01:26.496 ******* 2026-04-01 00:55:44.089035 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089042 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089048 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089055 | orchestrator | 2026-04-01 00:55:44.089062 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-01 00:55:44.089069 | orchestrator | Wednesday 01 April 2026 00:51:37 +0000 (0:00:00.433) 0:01:26.929 ******* 2026-04-01 00:55:44.089076 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089084 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089091 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089099 | orchestrator | 2026-04-01 00:55:44.089106 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-01 00:55:44.089113 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:01.190) 0:01:28.120 ******* 2026-04-01 00:55:44.089120 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.089127 | orchestrator | 2026-04-01 00:55:44.089134 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-01 00:55:44.089141 | orchestrator | Wednesday 01 April 2026 00:51:39 +0000 (0:00:00.862) 0:01:28.982 ******* 2026-04-01 00:55:44.089158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.089174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.089215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2026-04-01 00:55:44.089250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.089259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089293 | orchestrator | 2026-04-01 00:55:44.089300 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-01 00:55:44.089307 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:03.759) 0:01:32.742 ******* 2026-04-01 00:55:44.089318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.089326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089356 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.089383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089432 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.089459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089487 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089495 | orchestrator | 2026-04-01 00:55:44.089502 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-01 00:55:44.089511 | orchestrator | Wednesday 01 April 2026 00:51:43 +0000 (0:00:00.846) 0:01:33.589 ******* 2026-04-01 00:55:44.089518 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089537 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089561 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.089595 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089602 | orchestrator | 2026-04-01 00:55:44.089609 | orchestrator | 
TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-01 00:55:44.089616 | orchestrator | Wednesday 01 April 2026 00:51:45 +0000 (0:00:01.273) 0:01:34.863 ******* 2026-04-01 00:55:44.089623 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.089630 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.089637 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.089644 | orchestrator | 2026-04-01 00:55:44.089651 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-01 00:55:44.089659 | orchestrator | Wednesday 01 April 2026 00:51:46 +0000 (0:00:01.261) 0:01:36.125 ******* 2026-04-01 00:55:44.089666 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.089672 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.089679 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.089686 | orchestrator | 2026-04-01 00:55:44.089693 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-01 00:55:44.089700 | orchestrator | Wednesday 01 April 2026 00:51:48 +0000 (0:00:01.994) 0:01:38.119 ******* 2026-04-01 00:55:44.089707 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089714 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089721 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089729 | orchestrator | 2026-04-01 00:55:44.089736 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-01 00:55:44.089743 | orchestrator | Wednesday 01 April 2026 00:51:48 +0000 (0:00:00.283) 0:01:38.402 ******* 2026-04-01 00:55:44.089750 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.089758 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.089765 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.089772 | orchestrator | 2026-04-01 00:55:44.089780 | orchestrator | TASK 
[include_role : designate] ************************************************ 2026-04-01 00:55:44.089788 | orchestrator | Wednesday 01 April 2026 00:51:49 +0000 (0:00:00.363) 0:01:38.766 ******* 2026-04-01 00:55:44.089795 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.089802 | orchestrator | 2026-04-01 00:55:44.089814 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-01 00:55:44.089836 | orchestrator | Wednesday 01 April 2026 00:51:49 +0000 (0:00:00.966) 0:01:39.733 ******* 2026-04-01 00:55:44.089846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.089857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.089873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.089939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.089947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.089994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.090002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.090010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090102 | orchestrator | 2026-04-01 00:55:44.090108 | orchestrator | TASK [haproxy-config 
: Add configuration for designate when using single external frontend] *** 2026-04-01 00:55:44.090115 | orchestrator | Wednesday 01 April 2026 00:51:54 +0000 (0:00:04.425) 0:01:44.158 ******* 2026-04-01 00:55:44.090128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.090135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.090145 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090197 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090212 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.090223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.090231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.090243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.090279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-01 00:55:44.090299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090306 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.090313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.090371 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.090377 | orchestrator | 2026-04-01 00:55:44.090383 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-01 00:55:44.090412 | orchestrator | Wednesday 01 April 2026 00:51:55 +0000 (0:00:01.023) 0:01:45.181 ******* 2026-04-01 00:55:44.090421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090437 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.090443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090481 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.090488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.090502 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.090509 | orchestrator | 2026-04-01 00:55:44.090516 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-01 00:55:44.090525 | orchestrator | Wednesday 01 April 2026 00:51:56 +0000 (0:00:01.473) 0:01:46.655 ******* 2026-04-01 
00:55:44.090533 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.090546 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.090553 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.090560 | orchestrator | 2026-04-01 00:55:44.090567 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-01 00:55:44.090573 | orchestrator | Wednesday 01 April 2026 00:51:58 +0000 (0:00:01.315) 0:01:47.970 ******* 2026-04-01 00:55:44.090580 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.090586 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.090591 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.090597 | orchestrator | 2026-04-01 00:55:44.090602 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-01 00:55:44.090608 | orchestrator | Wednesday 01 April 2026 00:52:00 +0000 (0:00:02.196) 0:01:50.167 ******* 2026-04-01 00:55:44.090614 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.090619 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.090624 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.090636 | orchestrator | 2026-04-01 00:55:44.090642 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-01 00:55:44.090649 | orchestrator | Wednesday 01 April 2026 00:52:00 +0000 (0:00:00.350) 0:01:50.517 ******* 2026-04-01 00:55:44.090656 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.090662 | orchestrator | 2026-04-01 00:55:44.090670 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-01 00:55:44.090676 | orchestrator | Wednesday 01 April 2026 00:52:01 +0000 (0:00:00.791) 0:01:51.309 ******* 2026-04-01 00:55:44.090688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:55:44.090745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:55:44.090778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-01 00:55:44.090800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090807 | orchestrator | 2026-04-01 00:55:44.090814 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-01 00:55:44.090825 | orchestrator | Wednesday 01 April 2026 00:52:07 +0000 (0:00:05.708) 0:01:57.017 ******* 2026-04-01 00:55:44.090837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:55:44.090851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090857 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.090870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:55:44.090884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-01 00:55:44.090897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090911 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.090921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.090929 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.090936 | orchestrator | 2026-04-01 00:55:44.090943 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-01 00:55:44.090950 | orchestrator | Wednesday 01 April 2026 00:52:10 +0000 (0:00:03.397) 0:02:00.414 ******* 2026-04-01 00:55:44.090957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  
2026-04-01 00:55:44.090972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:55:44.090984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:55:44.090992 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:55:44.091007 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:55:44.091024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-01 00:55:44.091031 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091038 | orchestrator | 2026-04-01 00:55:44.091044 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-01 00:55:44.091052 | orchestrator | Wednesday 01 April 2026 00:52:14 +0000 (0:00:03.608) 0:02:04.023 ******* 2026-04-01 00:55:44.091059 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.091066 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.091073 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.091080 | orchestrator | 2026-04-01 00:55:44.091087 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-01 00:55:44.091093 | orchestrator | Wednesday 01 April 2026 00:52:15 +0000 (0:00:01.628) 0:02:05.652 ******* 2026-04-01 00:55:44.091100 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.091106 | orchestrator | changed: 
[testbed-node-1] 2026-04-01 00:55:44.091112 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.091119 | orchestrator | 2026-04-01 00:55:44.091126 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-01 00:55:44.091132 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:02.084) 0:02:07.737 ******* 2026-04-01 00:55:44.091144 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091150 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091157 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091164 | orchestrator | 2026-04-01 00:55:44.091171 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-01 00:55:44.091177 | orchestrator | Wednesday 01 April 2026 00:52:18 +0000 (0:00:00.283) 0:02:08.020 ******* 2026-04-01 00:55:44.091184 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.091190 | orchestrator | 2026-04-01 00:55:44.091197 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-01 00:55:44.091203 | orchestrator | Wednesday 01 April 2026 00:52:19 +0000 (0:00:00.751) 0:02:08.771 ******* 2026-04-01 00:55:44.091218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.091226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.091238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.091244 | orchestrator | 2026-04-01 00:55:44.091252 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-01 00:55:44.091259 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:02.871) 0:02:11.642 ******* 2026-04-01 
00:55:44.091266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.091279 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.091294 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.091313 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091320 | orchestrator | 2026-04-01 00:55:44.091326 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-01 00:55:44.091334 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.319) 0:02:11.962 ******* 2026-04-01 00:55:44.091342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091356 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091380 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.091467 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091480 | orchestrator | 2026-04-01 00:55:44.091487 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-01 00:55:44.091494 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.575) 0:02:12.538 ******* 2026-04-01 00:55:44.091501 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.091508 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.091514 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.091521 | orchestrator | 2026-04-01 00:55:44.091529 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-01 00:55:44.091536 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:01.252) 0:02:13.790 ******* 2026-04-01 00:55:44.091543 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.091550 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.091557 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.091564 | orchestrator | 2026-04-01 00:55:44.091572 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-01 00:55:44.091579 | 
orchestrator | Wednesday 01 April 2026 00:52:25 +0000 (0:00:01.884) 0:02:15.675 ******* 2026-04-01 00:55:44.091586 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091593 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091600 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091607 | orchestrator | 2026-04-01 00:55:44.091614 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-01 00:55:44.091621 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.551) 0:02:16.227 ******* 2026-04-01 00:55:44.091628 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.091636 | orchestrator | 2026-04-01 00:55:44.091643 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-01 00:55:44.091649 | orchestrator | Wednesday 01 April 2026 00:52:27 +0000 (0:00:00.875) 0:02:17.102 ******* 2026-04-01 00:55:44.091668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:55:44.091689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:55:44.091710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:55:44.091723 | orchestrator | 2026-04-01 00:55:44.091730 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single 
external frontend] *** 2026-04-01 00:55:44.091737 | orchestrator | Wednesday 01 April 2026 00:52:31 +0000 (0:00:03.877) 0:02:20.980 ******* 2026-04-01 00:55:44.091749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:55:44.091756 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:55:44.091782 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:55:44.091804 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091811 | orchestrator | 2026-04-01 00:55:44.091818 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-01 00:55:44.091824 | orchestrator | Wednesday 01 April 2026 00:52:32 +0000 (0:00:00.929) 0:02:21.909 ******* 2026-04-01 00:55:44.091832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:55:44.091909 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.091916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:55:44.091923 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.091936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-01 00:55:44.091966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-01 00:55:44.091973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-01 00:55:44.091980 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.091988 | orchestrator | 2026-04-01 00:55:44.091995 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-01 00:55:44.092006 | orchestrator | Wednesday 01 April 2026 00:52:33 +0000 (0:00:00.985) 0:02:22.895 ******* 2026-04-01 00:55:44.092014 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.092022 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.092029 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.092036 | orchestrator | 2026-04-01 00:55:44.092043 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-01 00:55:44.092050 | orchestrator | Wednesday 01 April 2026 00:52:34 +0000 (0:00:01.258) 0:02:24.154 ******* 2026-04-01 00:55:44.092057 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.092065 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.092072 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.092079 | orchestrator | 2026-04-01 00:55:44.092086 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-01 00:55:44.092092 | orchestrator | 
Wednesday 01 April 2026 00:52:36 +0000 (0:00:02.138) 0:02:26.292 ******* 2026-04-01 00:55:44.092099 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.092105 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.092112 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.092118 | orchestrator | 2026-04-01 00:55:44.092125 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-01 00:55:44.092132 | orchestrator | Wednesday 01 April 2026 00:52:37 +0000 (0:00:00.507) 0:02:26.799 ******* 2026-04-01 00:55:44.092139 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.092146 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.092152 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.092159 | orchestrator | 2026-04-01 00:55:44.092165 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-01 00:55:44.092172 | orchestrator | Wednesday 01 April 2026 00:52:37 +0000 (0:00:00.306) 0:02:27.105 ******* 2026-04-01 00:55:44.092179 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.092186 | orchestrator | 2026-04-01 00:55:44.092192 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-01 00:55:44.092200 | orchestrator | Wednesday 01 April 2026 00:52:38 +0000 (0:00:00.886) 0:02:27.991 ******* 2026-04-01 00:55:44.092215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:55:44.092231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:55:44.092239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092251 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:55:44.092260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:55:44.092267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:55:44.092493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:55:44.092524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092548 | orchestrator | 2026-04-01 00:55:44.092556 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-01 00:55:44.092562 | orchestrator | Wednesday 01 April 2026 00:52:42 +0000 (0:00:03.783) 0:02:31.775 ******* 2026-04-01 00:55:44.092570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:55:44.092578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:55:44.092603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092611 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.092619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:55:44.092631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:55:44.092638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092645 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.092653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:55:44.092670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 
00:55:44.092678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:55:44.092685 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.092692 | orchestrator | 2026-04-01 00:55:44.092699 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-01 00:55:44.092706 | orchestrator | Wednesday 01 April 2026 00:52:42 +0000 (0:00:00.635) 0:02:32.411 ******* 2026-04-01 00:55:44.092714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-01 00:55:44.092728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-01 00:55:44.092735 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.092743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}})  2026-04-01 00:55:44.092750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-01 00:55:44.092757 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.092764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-01 00:55:44.092771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-01 00:55:44.092782 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.092789 | orchestrator | 2026-04-01 00:55:44.092796 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-01 00:55:44.092803 | orchestrator | Wednesday 01 April 2026 00:52:43 +0000 (0:00:00.792) 0:02:33.204 ******* 2026-04-01 00:55:44.092809 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.092816 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.092822 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.092828 | orchestrator | 2026-04-01 00:55:44.092835 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-01 00:55:44.092841 | orchestrator | Wednesday 01 April 2026 00:52:45 +0000 (0:00:01.564) 0:02:34.768 ******* 2026-04-01 00:55:44.092848 | orchestrator | changed: [testbed-node-0] 
2026-04-01 00:55:44.092854 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.092860 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.092866 | orchestrator | 2026-04-01 00:55:44.092912 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-01 00:55:44.092920 | orchestrator | Wednesday 01 April 2026 00:52:46 +0000 (0:00:01.921) 0:02:36.690 ******* 2026-04-01 00:55:44.092927 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.092934 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.092941 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.092947 | orchestrator | 2026-04-01 00:55:44.092954 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-01 00:55:44.092961 | orchestrator | Wednesday 01 April 2026 00:52:47 +0000 (0:00:00.533) 0:02:37.224 ******* 2026-04-01 00:55:44.092968 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.092974 | orchestrator | 2026-04-01 00:55:44.092987 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-01 00:55:44.092994 | orchestrator | Wednesday 01 April 2026 00:52:48 +0000 (0:00:00.978) 0:02:38.202 ******* 2026-04-01 00:55:44.093002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093066 | orchestrator | 2026-04-01 00:55:44.093073 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-01 00:55:44.093081 | orchestrator | Wednesday 01 April 2026 00:52:52 +0000 (0:00:03.795) 0:02:41.997 ******* 2026-04-01 00:55:44.093093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.093107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093115 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.093128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.093136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093143 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.093155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.093168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093175 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.093182 | orchestrator | 2026-04-01 00:55:44.093189 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-01 00:55:44.093195 | orchestrator | Wednesday 01 April 2026 00:52:53 +0000 (0:00:00.874) 0:02:42.872 ******* 2026-04-01 00:55:44.093204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093221 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.093229 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093247 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.093254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093270 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.093276 | orchestrator | 2026-04-01 00:55:44.093283 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-01 00:55:44.093290 | orchestrator | Wednesday 01 April 2026 00:52:53 +0000 (0:00:00.849) 0:02:43.721 ******* 2026-04-01 00:55:44.093298 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.093304 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.093311 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.093323 | orchestrator | 2026-04-01 00:55:44.093329 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-01 00:55:44.093336 | orchestrator | Wednesday 01 April 2026 00:52:55 +0000 (0:00:01.426) 0:02:45.147 ******* 2026-04-01 00:55:44.093342 | orchestrator | changed: 
[testbed-node-0] 2026-04-01 00:55:44.093348 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.093355 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.093362 | orchestrator | 2026-04-01 00:55:44.093369 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-01 00:55:44.093376 | orchestrator | Wednesday 01 April 2026 00:52:57 +0000 (0:00:02.296) 0:02:47.444 ******* 2026-04-01 00:55:44.093383 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.093414 | orchestrator | 2026-04-01 00:55:44.093422 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-01 00:55:44.093429 | orchestrator | Wednesday 01 April 2026 00:52:58 +0000 (0:00:01.197) 0:02:48.642 ******* 2026-04-01 00:55:44.093440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.093510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093518 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093571 | orchestrator | 
2026-04-01 00:55:44.093578 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-01 00:55:44.093585 | orchestrator | Wednesday 01 April 2026 00:53:02 +0000 (0:00:03.912) 0:02:52.555 ******* 2026-04-01 00:55:44.093593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.093600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093631 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.093644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.093652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-01 00:55:44.093670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093704 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.093711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.093718 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.093724 | orchestrator | 2026-04-01 00:55:44.093730 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-01 00:55:44.093737 | orchestrator | Wednesday 01 April 2026 00:53:03 +0000 (0:00:00.673) 0:02:53.228 ******* 2026-04-01 00:55:44.093743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093757 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.093764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093783 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.093794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.093810 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.093818 | orchestrator | 2026-04-01 00:55:44.093824 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-01 00:55:44.093830 | orchestrator | Wednesday 01 April 2026 00:53:05 +0000 (0:00:01.519) 0:02:54.748 
******* 2026-04-01 00:55:44.093836 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.093843 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.093849 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.093856 | orchestrator | 2026-04-01 00:55:44.093862 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-01 00:55:44.093867 | orchestrator | Wednesday 01 April 2026 00:53:06 +0000 (0:00:01.532) 0:02:56.280 ******* 2026-04-01 00:55:44.093874 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.093880 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.093886 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.093892 | orchestrator | 2026-04-01 00:55:44.093898 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-01 00:55:44.093904 | orchestrator | Wednesday 01 April 2026 00:53:08 +0000 (0:00:02.219) 0:02:58.500 ******* 2026-04-01 00:55:44.093910 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.093916 | orchestrator | 2026-04-01 00:55:44.093922 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-01 00:55:44.093929 | orchestrator | Wednesday 01 April 2026 00:53:09 +0000 (0:00:00.936) 0:02:59.436 ******* 2026-04-01 00:55:44.093936 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-01 00:55:44.093943 | orchestrator | 2026-04-01 00:55:44.093949 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-01 00:55:44.093955 | orchestrator | Wednesday 01 April 2026 00:53:11 +0000 (0:00:01.617) 0:03:01.054 ******* 2026-04-01 00:55:44.093967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.093983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.093991 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.094077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.094086 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.094113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.094120 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094127 | orchestrator | 2026-04-01 00:55:44.094134 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-01 00:55:44.094140 | orchestrator | Wednesday 01 April 2026 00:53:13 +0000 (0:00:02.495) 0:03:03.550 ******* 2026-04-01 00:55:44.094152 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.094164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.094171 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.094195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.094202 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:55:44.094226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-01 00:55:44.094233 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094239 | orchestrator | 2026-04-01 00:55:44.094245 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-01 00:55:44.094252 | orchestrator | Wednesday 
01 April 2026 00:53:17 +0000 (0:00:03.855) 0:03:07.406 ******* 2026-04-01 00:55:44.094259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:55:44.094271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:55:44.094278 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-04-01 00:55:44.094299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:55:44.094305 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:55:44.094335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-01 00:55:44.094343 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094349 | orchestrator | 2026-04-01 00:55:44.094355 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-01 00:55:44.094363 | orchestrator | Wednesday 01 April 2026 00:53:20 +0000 (0:00:02.559) 0:03:09.965 ******* 2026-04-01 00:55:44.094368 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.094376 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.094382 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.094388 | orchestrator | 2026-04-01 00:55:44.094456 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-01 00:55:44.094462 | orchestrator | Wednesday 01 April 2026 00:53:22 +0000 (0:00:02.088) 0:03:12.054 ******* 2026-04-01 00:55:44.094468 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094474 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094481 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094486 | orchestrator | 2026-04-01 00:55:44.094492 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-01 00:55:44.094499 | orchestrator | Wednesday 01 April 2026 00:53:23 +0000 (0:00:01.195) 0:03:13.250 ******* 2026-04-01 00:55:44.094504 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094511 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094517 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094522 | orchestrator | 2026-04-01 00:55:44.094529 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-01 00:55:44.094535 | orchestrator | Wednesday 01 April 2026 00:53:23 +0000 (0:00:00.268) 0:03:13.518 ******* 2026-04-01 00:55:44.094541 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.094555 | orchestrator | 2026-04-01 
00:55:44.094561 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-01 00:55:44.094567 | orchestrator | Wednesday 01 April 2026 00:53:24 +0000 (0:00:01.003) 0:03:14.522 ******* 2026-04-01 00:55:44.094582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:55:44.094589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:55:44.094595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-01 00:55:44.094600 | orchestrator | 2026-04-01 00:55:44.094606 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-01 00:55:44.094611 | orchestrator | Wednesday 01 April 2026 00:53:26 +0000 (0:00:01.786) 0:03:16.309 ******* 2026-04-01 00:55:44.094624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:55:44.094631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:55:44.094641 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094647 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-01 00:55:44.094664 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094670 | orchestrator | 2026-04-01 00:55:44.094676 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-01 00:55:44.094682 | orchestrator | Wednesday 01 April 2026 00:53:26 +0000 (0:00:00.329) 0:03:16.638 ******* 2026-04-01 00:55:44.094688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:55:44.094695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:55:44.094717 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094730 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-01 00:55:44.094744 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094750 | orchestrator | 2026-04-01 00:55:44.094764 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-01 00:55:44.094771 | orchestrator | Wednesday 01 April 2026 00:53:27 +0000 (0:00:00.561) 0:03:17.200 ******* 2026-04-01 00:55:44.094784 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094792 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094799 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094814 | orchestrator | 2026-04-01 00:55:44.094828 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-01 00:55:44.094835 | orchestrator | Wednesday 01 April 2026 00:53:28 +0000 (0:00:00.777) 0:03:17.977 ******* 2026-04-01 00:55:44.094849 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094856 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094871 | 
orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094878 | orchestrator | 2026-04-01 00:55:44.094892 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-01 00:55:44.094898 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:01.264) 0:03:19.242 ******* 2026-04-01 00:55:44.094905 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.094923 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.094937 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.094944 | orchestrator | 2026-04-01 00:55:44.094950 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-01 00:55:44.094956 | orchestrator | Wednesday 01 April 2026 00:53:29 +0000 (0:00:00.297) 0:03:19.539 ******* 2026-04-01 00:55:44.094963 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.094969 | orchestrator | 2026-04-01 00:55:44.094976 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-01 00:55:44.094982 | orchestrator | Wednesday 01 April 2026 00:53:31 +0000 (0:00:01.274) 0:03:20.814 ******* 2026-04-01 00:55:44.094990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.095004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095046 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.095072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.095105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.095129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.095146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095153 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.095182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095199 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  
2026-04-01 00:55:44.095224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 
'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.095267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.095312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.095335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.095365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.095409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.095442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095449 | orchestrator | 2026-04-01 00:55:44.095457 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-01 00:55:44.095464 | orchestrator | Wednesday 01 April 2026 00:53:35 +0000 (0:00:04.615) 0:03:25.430 ******* 2026-04-01 00:55:44.095476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.095484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.095545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.095581 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.095649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.095656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095676 | orchestrator | 2026-04-01 00:55:44 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:44.095682 | orchestrator | 2026-04-01 00:55:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:44.095690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-04-01 00:55:44.095701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.095720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.095732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.095740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095747 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.095759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.095771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.095932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-01 00:55:44.095952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.095965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-01 00:55:44.095981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.095989 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.096002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.096017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.096024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-01 00:55:44.096040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.096048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-01 00:55:44.096066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-01 00:55:44.096073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-01 00:55:44.096093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-01 00:55:44.096101 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.096107 | orchestrator | 2026-04-01 00:55:44.096115 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-01 00:55:44.096122 | orchestrator | Wednesday 01 April 2026 00:53:37 +0000 (0:00:01.606) 0:03:27.037 ******* 2026-04-01 00:55:44.096129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096145 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.096152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096165 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.096171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.096188 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.096194 | orchestrator | 2026-04-01 00:55:44.096200 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-01 00:55:44.096207 | orchestrator | Wednesday 01 April 2026 00:53:38 +0000 (0:00:01.405) 0:03:28.442 ******* 2026-04-01 00:55:44.096213 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.096219 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.096225 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.096231 | orchestrator | 2026-04-01 00:55:44.096368 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-01 00:55:44.096449 | orchestrator | Wednesday 01 April 2026 00:53:40 +0000 (0:00:01.654) 0:03:30.096 ******* 2026-04-01 00:55:44.096458 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.096464 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.096470 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.096485 | orchestrator | 2026-04-01 00:55:44.096491 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-01 00:55:44.096497 | orchestrator | Wednesday 01 April 2026 00:53:42 +0000 (0:00:02.112) 0:03:32.209 ******* 2026-04-01 00:55:44.096503 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.096509 | orchestrator | 2026-04-01 00:55:44.096515 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-01 00:55:44.096522 | orchestrator | Wednesday 01 April 2026 00:53:43 +0000 (0:00:01.136) 0:03:33.346 ******* 2026-04-01 00:55:44.096534 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.096544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.096563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.096570 | orchestrator | 2026-04-01 00:55:44.096576 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-01 00:55:44.096582 | orchestrator | Wednesday 01 April 2026 00:53:46 +0000 (0:00:03.270) 0:03:36.616 ******* 2026-04-01 00:55:44.096595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.096602 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.096612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.096619 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.096625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.096633 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.096639 | orchestrator | 2026-04-01 00:55:44.096646 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-01 00:55:44.096652 | orchestrator | Wednesday 01 April 2026 00:53:47 +0000 (0:00:01.109) 0:03:37.725 ******* 2026-04-01 00:55:44.096664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096682 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 00:55:44.096688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096700 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.096706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.096720 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.096726 | orchestrator | 2026-04-01 00:55:44.096732 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-01 00:55:44.096739 | orchestrator | Wednesday 01 April 2026 00:53:48 +0000 (0:00:00.736) 0:03:38.462 ******* 2026-04-01 00:55:44.096747 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.096755 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.096763 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.096771 | orchestrator | 2026-04-01 00:55:44.096779 | orchestrator | TASK [proxysql-config : Copying over 
placement ProxySQL rules config] ********** 2026-04-01 00:55:44.096787 | orchestrator | Wednesday 01 April 2026 00:53:49 +0000 (0:00:01.234) 0:03:39.697 ******* 2026-04-01 00:55:44.096794 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.096801 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.096809 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.096816 | orchestrator | 2026-04-01 00:55:44.096826 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-01 00:55:44.096832 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:02.152) 0:03:41.849 ******* 2026-04-01 00:55:44.096838 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.096844 | orchestrator | 2026-04-01 00:55:44.096849 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-01 00:55:44.096855 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:01.494) 0:03:43.344 ******* 2026-04-01 00:55:44.096862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.096962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.096980 | orchestrator | 2026-04-01 00:55:44.096986 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-01 00:55:44.096993 | orchestrator | Wednesday 01 April 2026 00:53:59 +0000 (0:00:05.524) 0:03:48.869 ******* 2026-04-01 00:55:44.097000 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097037 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097087 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.097113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.097134 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097141 | orchestrator | 2026-04-01 00:55:44.097147 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-01 00:55:44.097154 | orchestrator | Wednesday 01 April 2026 00:53:59 +0000 (0:00:00.777) 0:03:49.646 ******* 2026-04-01 00:55:44.097162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097184 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097191 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097237 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.097266 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097272 | orchestrator | 2026-04-01 00:55:44.097278 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-01 00:55:44.097290 | orchestrator | Wednesday 01 April 2026 00:54:01 +0000 (0:00:01.690) 0:03:51.337 ******* 2026-04-01 00:55:44.097301 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.097308 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.097317 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.097326 | orchestrator | 2026-04-01 00:55:44.097333 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-01 00:55:44.097340 | orchestrator | Wednesday 01 April 2026 00:54:03 +0000 (0:00:01.447) 0:03:52.785 ******* 2026-04-01 00:55:44.097346 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.097352 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.097359 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.097365 | orchestrator | 2026-04-01 00:55:44.097370 | orchestrator | TASK [include_role : nova-cell] 
************************************************ 2026-04-01 00:55:44.097376 | orchestrator | Wednesday 01 April 2026 00:54:05 +0000 (0:00:02.405) 0:03:55.191 ******* 2026-04-01 00:55:44.097382 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.097388 | orchestrator | 2026-04-01 00:55:44.097412 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-01 00:55:44.097419 | orchestrator | Wednesday 01 April 2026 00:54:06 +0000 (0:00:01.322) 0:03:56.513 ******* 2026-04-01 00:55:44.097426 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-01 00:55:44.097432 | orchestrator | 2026-04-01 00:55:44.097439 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-01 00:55:44.097446 | orchestrator | Wednesday 01 April 2026 00:54:07 +0000 (0:00:01.088) 0:03:57.602 ******* 2026-04-01 00:55:44.097453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-01 00:55:44.097468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-01 00:55:44.097475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-01 00:55:44.097482 | orchestrator | 2026-04-01 00:55:44.097488 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-01 00:55:44.097496 | orchestrator | Wednesday 01 April 2026 00:54:11 +0000 (0:00:04.000) 0:04:01.603 ******* 2026-04-01 00:55:44.097503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097516 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097529 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097546 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097552 | orchestrator | 2026-04-01 00:55:44.097558 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-01 00:55:44.097565 | orchestrator | Wednesday 01 April 2026 00:54:13 +0000 (0:00:01.282) 0:04:02.885 ******* 2026-04-01 00:55:44.097572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:55:44.097579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:55:44.097586 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}})  2026-04-01 00:55:44.097599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:55:44.097605 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:55:44.097623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-01 00:55:44.097629 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097636 | orchestrator | 2026-04-01 00:55:44.097642 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-01 00:55:44.097648 | orchestrator | Wednesday 01 April 2026 00:54:14 +0000 (0:00:01.581) 0:04:04.467 ******* 2026-04-01 00:55:44.097655 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.097661 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.097668 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.097674 | orchestrator | 2026-04-01 00:55:44.097681 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:55:44.097692 | orchestrator | Wednesday 01 April 2026 00:54:17 +0000 (0:00:02.715) 0:04:07.182 ******* 2026-04-01 00:55:44.097699 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.097706 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.097713 | orchestrator | changed: 
[testbed-node-2] 2026-04-01 00:55:44.097720 | orchestrator | 2026-04-01 00:55:44.097728 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-01 00:55:44.097734 | orchestrator | Wednesday 01 April 2026 00:54:20 +0000 (0:00:02.985) 0:04:10.168 ******* 2026-04-01 00:55:44.097741 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-01 00:55:44.097748 | orchestrator | 2026-04-01 00:55:44.097755 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-01 00:55:44.097762 | orchestrator | Wednesday 01 April 2026 00:54:21 +0000 (0:00:00.706) 0:04:10.874 ******* 2026-04-01 00:55:44.097769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097777 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-04-01 00:55:44.097797 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097810 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097816 | orchestrator | 2026-04-01 00:55:44.097822 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-01 00:55:44.097829 | orchestrator | Wednesday 01 April 2026 00:54:22 +0000 (0:00:01.138) 0:04:12.013 ******* 2026-04-01 00:55:44.097835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097842 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097866 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-01 00:55:44.097881 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097888 | orchestrator | 2026-04-01 00:55:44.097894 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-01 00:55:44.097901 | orchestrator | Wednesday 01 April 2026 00:54:23 +0000 (0:00:01.087) 0:04:13.100 ******* 2026-04-01 00:55:44.097907 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.097913 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.097920 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.097926 | orchestrator | 2026-04-01 00:55:44.097933 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-01 00:55:44.097939 | orchestrator | Wednesday 01 April 2026 00:54:24 +0000 (0:00:01.204) 0:04:14.304 ******* 2026-04-01 00:55:44.097945 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.097952 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.097958 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.097964 | orchestrator | 2026-04-01 00:55:44.097970 | 
orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:55:44.097976 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:02.116) 0:04:16.421 ******* 2026-04-01 00:55:44.097983 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.097989 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.097996 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.098002 | orchestrator | 2026-04-01 00:55:44.098008 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-01 00:55:44.098064 | orchestrator | Wednesday 01 April 2026 00:54:29 +0000 (0:00:02.971) 0:04:19.393 ******* 2026-04-01 00:55:44.098075 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-01 00:55:44.098083 | orchestrator | 2026-04-01 00:55:44.098090 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-01 00:55:44.098097 | orchestrator | Wednesday 01 April 2026 00:54:30 +0000 (0:00:01.296) 0:04:20.689 ******* 2026-04-01 00:55:44.098110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:55:44.098119 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.098127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:55:44.098140 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.098147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:55:44.098156 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.098163 | orchestrator | 2026-04-01 00:55:44.098170 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-01 00:55:44.098177 | orchestrator | Wednesday 01 April 2026 00:54:32 +0000 (0:00:01.194) 0:04:21.884 ******* 2026-04-01 00:55:44.098195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 
00:55:44.098202 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.098208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:55:44.098215 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.098222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-01 00:55:44.098228 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.098234 | orchestrator | 2026-04-01 00:55:44.098240 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-01 00:55:44.098245 | orchestrator | Wednesday 01 April 2026 00:54:33 +0000 (0:00:01.214) 0:04:23.099 ******* 2026-04-01 00:55:44.098252 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.098258 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.098263 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.098270 | orchestrator | 2026-04-01 00:55:44.098276 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] 
********** 2026-04-01 00:55:44.098282 | orchestrator | Wednesday 01 April 2026 00:54:34 +0000 (0:00:01.516) 0:04:24.615 ******* 2026-04-01 00:55:44.098289 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.098296 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.098304 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.098310 | orchestrator | 2026-04-01 00:55:44.098321 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-01 00:55:44.098328 | orchestrator | Wednesday 01 April 2026 00:54:37 +0000 (0:00:02.277) 0:04:26.893 ******* 2026-04-01 00:55:44.098341 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.098347 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.098354 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.098360 | orchestrator | 2026-04-01 00:55:44.098367 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-01 00:55:44.098375 | orchestrator | Wednesday 01 April 2026 00:54:40 +0000 (0:00:02.951) 0:04:29.844 ******* 2026-04-01 00:55:44.098381 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.098389 | orchestrator | 2026-04-01 00:55:44.098415 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-01 00:55:44.098421 | orchestrator | Wednesday 01 April 2026 00:54:41 +0000 (0:00:01.182) 0:04:31.027 ******* 2026-04-01 00:55:44.098429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:55:44.098451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:55:44.098504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-01 00:55:44.098557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098586 | orchestrator | 2026-04-01 00:55:44.098592 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-01 00:55:44.098597 | orchestrator | Wednesday 01 April 2026 00:54:44 +0000 (0:00:03.544) 0:04:34.572 ******* 2026-04-01 00:55:44.098603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:55:44.098618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098647 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.098653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:55:44.098664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098674 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098695 | 
orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.098707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-01 00:55:44.098713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-01 00:55:44.098724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-01 00:55:44.098743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-01 00:55:44.098749 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.098755 | orchestrator | 2026-04-01 00:55:44.098762 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-01 00:55:44.098768 | orchestrator | Wednesday 01 April 2026 00:54:45 +0000 (0:00:00.634) 0:04:35.206 ******* 
2026-04-01 00:55:44.098776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098790 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.098797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098816 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.098822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-01 00:55:44.098842 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.098848 | orchestrator | 2026-04-01 00:55:44.098854 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-01 00:55:44.098860 | orchestrator | Wednesday 01 April 2026 00:54:46 +0000 
(0:00:00.815) 0:04:36.021 ******* 2026-04-01 00:55:44.098866 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.098872 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.098878 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.098884 | orchestrator | 2026-04-01 00:55:44.098890 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-01 00:55:44.098895 | orchestrator | Wednesday 01 April 2026 00:54:47 +0000 (0:00:01.340) 0:04:37.362 ******* 2026-04-01 00:55:44.098902 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.098909 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.098915 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.098922 | orchestrator | 2026-04-01 00:55:44.098929 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-01 00:55:44.098936 | orchestrator | Wednesday 01 April 2026 00:54:49 +0000 (0:00:02.040) 0:04:39.403 ******* 2026-04-01 00:55:44.098944 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.098951 | orchestrator | 2026-04-01 00:55:44.098958 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-01 00:55:44.098966 | orchestrator | Wednesday 01 April 2026 00:54:51 +0000 (0:00:01.426) 0:04:40.829 ******* 2026-04-01 00:55:44.098978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.098988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.099001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.099014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:55:44.099025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:55:44.099034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:55:44.099041 | orchestrator | 2026-04-01 00:55:44.099048 | orchestrator | TASK [haproxy-config : Add 
configuration for opensearch when using single external frontend] *** 2026-04-01 00:55:44.099059 | orchestrator | Wednesday 01 April 2026 00:54:55 +0000 (0:00:04.796) 0:04:45.625 ******* 2026-04-01 00:55:44.099072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.099080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:55:44.099087 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.099099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.099107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:55:44.099118 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.099130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.099137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:55:44.099145 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.099152 | orchestrator | 2026-04-01 00:55:44.099162 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-01 00:55:44.099169 | orchestrator | Wednesday 01 April 2026 00:54:56 +0000 (0:00:01.067) 0:04:46.693 ******* 2026-04-01 00:55:44.099176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.099184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099198 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.099206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.099219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099233 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.099244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.099252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-01 00:55:44.099266 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.099274 | orchestrator | 2026-04-01 00:55:44.099281 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-01 00:55:44.099288 | orchestrator | Wednesday 01 April 2026 00:54:58 +0000 (0:00:01.271) 0:04:47.964 ******* 2026-04-01 00:55:44.099295 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.099302 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.099309 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.099316 | orchestrator | 2026-04-01 00:55:44.099323 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-01 00:55:44.099329 | orchestrator | Wednesday 01 April 2026 00:54:58 +0000 (0:00:00.409) 0:04:48.374 ******* 2026-04-01 00:55:44.099336 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.099342 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.099348 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.099354 | orchestrator | 2026-04-01 00:55:44.099360 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-01 00:55:44.099366 | orchestrator | Wednesday 01 April 2026 00:54:59 +0000 (0:00:01.243) 0:04:49.617 ******* 2026-04-01 00:55:44.099372 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.099379 | orchestrator | 2026-04-01 00:55:44.099385 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-01 00:55:44.099447 | orchestrator | Wednesday 01 April 2026 00:55:01 +0000 (0:00:01.521) 0:04:51.139 ******* 
2026-04-01 00:55:44.099463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 00:55:44.099477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 00:55:44.099489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.099496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.099503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-01 00:55:44.099510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 00:55:44.099566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.099576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.099614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 00:55:44.099620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.099644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 00:55:44.099672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099712 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:55:44.099724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 
00:55:44.099731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099757 | orchestrator | 2026-04-01 00:55:44.099764 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-01 00:55:44.099770 | orchestrator | Wednesday 01 April 2026 00:55:05 +0000 (0:00:04.166) 0:04:55.305 ******* 2026-04-01 00:55:44.099783 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 00:55:44.099792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.099805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.099845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 00:55:44.099851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.099976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.099983 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.099990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 00:55:44.100006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 00:55:44.100015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.100022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 00:55:44.100068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.100114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.100127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.100135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:55:44.100149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 00:55:44.100159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-01 00:55:44.100166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 00:55:44.100206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.100213 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 00:55:44.100227 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100234 | orchestrator | 2026-04-01 00:55:44.100244 | orchestrator | TASK [haproxy-config : Configuring firewall 
for prometheus] ******************** 2026-04-01 00:55:44.100250 | orchestrator | Wednesday 01 April 2026 00:55:06 +0000 (0:00:00.951) 0:04:56.257 ******* 2026-04-01 00:55:44.100256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100286 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100330 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-01 00:55:44.100356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-01 00:55:44.100369 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100375 | orchestrator | 2026-04-01 00:55:44.100381 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-01 00:55:44.100388 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:01.135) 0:04:57.393 ******* 2026-04-01 00:55:44.100500 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100508 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100514 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100520 | orchestrator | 2026-04-01 00:55:44.100527 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-01 00:55:44.100533 | orchestrator | Wednesday 01 April 2026 00:55:08 +0000 (0:00:00.407) 0:04:57.801 ******* 2026-04-01 00:55:44.100540 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:55:44.100547 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100554 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100562 | orchestrator | 2026-04-01 00:55:44.100570 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-01 00:55:44.100578 | orchestrator | Wednesday 01 April 2026 00:55:09 +0000 (0:00:01.161) 0:04:58.962 ******* 2026-04-01 00:55:44.100585 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.100593 | orchestrator | 2026-04-01 00:55:44.100600 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-01 00:55:44.100605 | orchestrator | Wednesday 01 April 2026 00:55:10 +0000 (0:00:01.310) 0:05:00.273 ******* 2026-04-01 00:55:44.100627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:55:44.100635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:55:44.100647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-01 00:55:44.100654 | orchestrator | 2026-04-01 00:55:44.100660 | orchestrator | TASK 
[haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-01 00:55:44.100667 | orchestrator | Wednesday 01 April 2026 00:55:12 +0000 (0:00:02.411) 0:05:02.684 ******* 2026-04-01 00:55:44.100673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:55:44.100688 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:55:44.100706 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-01 00:55:44.100720 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100726 | orchestrator | 2026-04-01 00:55:44.100732 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-01 00:55:44.100739 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:00.392) 0:05:03.076 ******* 2026-04-01 00:55:44.100746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 
00:55:44.100755 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 00:55:44.100772 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-01 00:55:44.100786 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100793 | orchestrator | 2026-04-01 00:55:44.100799 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-01 00:55:44.100805 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:00.576) 0:05:03.653 ******* 2026-04-01 00:55:44.100812 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100818 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100825 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100832 | orchestrator | 2026-04-01 00:55:44.100839 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-01 00:55:44.100851 | orchestrator | Wednesday 01 April 2026 00:55:14 +0000 (0:00:00.441) 0:05:04.094 ******* 2026-04-01 00:55:44.100858 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.100865 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.100871 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.100877 | orchestrator | 2026-04-01 00:55:44.100883 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-01 00:55:44.100890 | orchestrator | Wednesday 01 April 2026 00:55:15 +0000 (0:00:01.276) 0:05:05.371 ******* 2026-04-01 00:55:44.100895 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-01 00:55:44.100903 | orchestrator | 2026-04-01 00:55:44.100910 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-01 00:55:44.100917 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:01.552) 0:05:06.924 ******* 2026-04-01 00:55:44.100930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-01 00:55:44.100939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-01 00:55:44.100952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-01 00:55:44.100964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.100976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.100984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-01 00:55:44.100990 | orchestrator | 2026-04-01 00:55:44.100997 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-01 00:55:44.101003 | orchestrator | Wednesday 01 April 2026 00:55:22 +0000 (0:00:05.685) 0:05:12.610 ******* 2026-04-01 00:55:44.101013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-01 00:55:44.101024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.101032 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-01 00:55:44.101052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.101058 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-01 00:55:44.101080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-01 00:55:44.101087 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101093 | orchestrator | 2026-04-01 00:55:44.101099 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-01 00:55:44.101106 | orchestrator | Wednesday 01 April 2026 
00:55:23 +0000 (0:00:01.023) 0:05:13.634 ******* 2026-04-01 00:55:44.101117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 00:55:44.101125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 00:55:44.101133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101149 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 00:55:44.101162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 
00:55:44.101173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101190 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 00:55:44.101202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-01 00:55:44.101208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-01 00:55:44.101221 | orchestrator | skipping: [testbed-node-2] 2026-04-01 
00:55:44.101227 | orchestrator | 2026-04-01 00:55:44.101233 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-01 00:55:44.101239 | orchestrator | Wednesday 01 April 2026 00:55:25 +0000 (0:00:01.337) 0:05:14.971 ******* 2026-04-01 00:55:44.101246 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.101252 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.101258 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.101264 | orchestrator | 2026-04-01 00:55:44.101270 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-01 00:55:44.101276 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:01.119) 0:05:16.090 ******* 2026-04-01 00:55:44.101282 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:55:44.101288 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:55:44.101294 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:55:44.101301 | orchestrator | 2026-04-01 00:55:44.101306 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-01 00:55:44.101317 | orchestrator | Wednesday 01 April 2026 00:55:28 +0000 (0:00:02.051) 0:05:18.142 ******* 2026-04-01 00:55:44.101324 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101331 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101338 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101344 | orchestrator | 2026-04-01 00:55:44.101351 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-01 00:55:44.101358 | orchestrator | Wednesday 01 April 2026 00:55:28 +0000 (0:00:00.280) 0:05:18.422 ******* 2026-04-01 00:55:44.101363 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101369 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101375 | orchestrator | skipping: [testbed-node-2] 2026-04-01 
00:55:44.101380 | orchestrator | 2026-04-01 00:55:44.101386 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-01 00:55:44.101413 | orchestrator | Wednesday 01 April 2026 00:55:29 +0000 (0:00:00.475) 0:05:18.898 ******* 2026-04-01 00:55:44.101419 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101425 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101438 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101446 | orchestrator | 2026-04-01 00:55:44.101451 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-01 00:55:44.101457 | orchestrator | Wednesday 01 April 2026 00:55:29 +0000 (0:00:00.283) 0:05:19.181 ******* 2026-04-01 00:55:44.101464 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101470 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101477 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101484 | orchestrator | 2026-04-01 00:55:44.101491 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-01 00:55:44.101498 | orchestrator | Wednesday 01 April 2026 00:55:29 +0000 (0:00:00.260) 0:05:19.442 ******* 2026-04-01 00:55:44.101505 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101512 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101519 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101526 | orchestrator | 2026-04-01 00:55:44.101533 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-01 00:55:44.101540 | orchestrator | Wednesday 01 April 2026 00:55:29 +0000 (0:00:00.266) 0:05:19.708 ******* 2026-04-01 00:55:44.101547 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:55:44.101554 | orchestrator | 2026-04-01 00:55:44.101561 | orchestrator | TASK 
[service-check-containers : loadbalancer | Check containers] ************** 2026-04-01 00:55:44.101568 | orchestrator | Wednesday 01 April 2026 00:55:31 +0000 (0:00:01.536) 0:05:21.244 ******* 2026-04-01 00:55:44.101581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.101640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-01 00:55:44.101651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-01 00:55:44.101659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-04-01 00:55:44.101666 | orchestrator | 2026-04-01 00:55:44.101673 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-01 00:55:44.101681 | orchestrator | Wednesday 01 April 2026 00:55:33 +0000 (0:00:02.465) 0:05:23.710 ******* 2026-04-01 00:55:44.101688 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:55:44.101695 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.101702 | orchestrator | } 2026-04-01 00:55:44.101709 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:55:44.101716 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.101723 | orchestrator | } 2026-04-01 00:55:44.101730 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:55:44.101738 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:55:44.101745 | orchestrator | } 2026-04-01 00:55:44.101760 | orchestrator | 2026-04-01 00:55:44.101767 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:55:44.101774 | orchestrator | Wednesday 01 April 2026 00:55:34 +0000 (0:00:00.303) 0:05:24.014 ******* 2026-04-01 00:55:44.101788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.101796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.101803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.101810 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:55:44.101817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.101829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.101837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.101849 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:55:44.101856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-01 00:55:44.101869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-01 00:55:44.101877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-01 00:55:44.101884 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:55:44.101891 | orchestrator | 2026-04-01 00:55:44.101898 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-01 00:55:44.101905 | orchestrator | Wednesday 01 April 2026 00:55:35 +0000 (0:00:01.444) 0:05:25.458 ******* 2026-04-01 00:55:44.101912 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.101920 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.101927 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.101934 | orchestrator | 2026-04-01 00:55:44.101941 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-01 00:55:44.101948 | orchestrator | Wednesday 01 April 2026 00:55:36 +0000 (0:00:00.852) 0:05:26.310 ******* 2026-04-01 
00:55:44.101955 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.101962 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.101968 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.101975 | orchestrator | 2026-04-01 00:55:44.101982 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-01 00:55:44.101989 | orchestrator | Wednesday 01 April 2026 00:55:36 +0000 (0:00:00.304) 0:05:26.615 ******* 2026-04-01 00:55:44.101996 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.102003 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.102009 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.102065 | orchestrator | 2026-04-01 00:55:44.102075 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-01 00:55:44.102082 | orchestrator | Wednesday 01 April 2026 00:55:37 +0000 (0:00:00.966) 0:05:27.581 ******* 2026-04-01 00:55:44.102090 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.102097 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.102104 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.102112 | orchestrator | 2026-04-01 00:55:44.102123 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-01 00:55:44.102131 | orchestrator | Wednesday 01 April 2026 00:55:38 +0000 (0:00:00.957) 0:05:28.539 ******* 2026-04-01 00:55:44.102143 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:55:44.102151 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:55:44.102158 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:55:44.102165 | orchestrator | 2026-04-01 00:55:44.102172 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-01 00:55:44.102180 | orchestrator | Wednesday 01 April 2026 00:55:39 +0000 (0:00:01.150) 0:05:29.690 ******* 2026-04-01 00:55:44.102196 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yp9d_9h5/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yp9d_9h5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_yp9d_9h5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_yp9d_9h5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:55:44.102212 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_sxe719yn/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_sxe719yn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_sxe719yn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_sxe719yn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:55:44.102232 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_s2omtl35/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_s2omtl35/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_s2omtl35/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_s2omtl35/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:55:44.102240 | orchestrator | 2026-04-01 00:55:44.102247 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:55:44.102256 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-01 00:55:44.102263 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-01 00:55:44.102279 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-01 00:55:44.102287 | orchestrator | 2026-04-01 00:55:44.102294 | orchestrator | 2026-04-01 00:55:44.102302 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:55:44.102309 | orchestrator | Wednesday 01 April 2026 00:55:42 +0000 (0:00:02.415) 0:05:32.105 ******* 2026-04-01 00:55:44.102316 | orchestrator | 
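Note on the failure above: every fatal pull uses the image name `registry.osism.tech/kolla/release//haproxy` — the double slash produces an empty path component, which the Docker daemon rejects with `400 Bad Request ("invalid reference format")`. The same `release//` pattern appears in every container spec in this play (haproxy, proxysql, keepalived, opensearch), which suggests an empty sub-path in the configured image namespace rather than a registry outage. A minimal sketch of the check (the variable names and the suspected config source are assumptions, not taken from this log):

```python
# Hedged sketch: detect the empty path component that makes the Docker
# daemon reject a pull with "invalid reference format".  This is a
# simplified check, not the full distribution/reference grammar.

def has_empty_path_component(image: str) -> bool:
    """Return True if the image name (tag stripped) contains '//'."""
    name = image.rsplit(":", 1)[0]   # drop the tag, if present
    return "" in name.split("/")     # '//' yields an empty component

# The reference from the failing task in this log:
print(has_empty_path_component(
    "registry.osism.tech/kolla/release//haproxy:2.8.16.20260328"))  # True
# With the extra slash removed, the reference is well-formed:
print(has_empty_path_component(
    "registry.osism.tech/kolla/release/haproxy:2.8.16.20260328"))   # False
```

If this diagnosis holds, the fix belongs in whichever variable assembles the namespace (in kolla-ansible deployments this is typically `docker_registry`/`docker_namespace`, but that is an assumption here), not in the individual service roles.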
=============================================================================== 2026-04-01 00:55:44.102323 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.71s 2026-04-01 00:55:44.102331 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.69s 2026-04-01 00:55:44.102338 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.53s 2026-04-01 00:55:44.102345 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.80s 2026-04-01 00:55:44.102353 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.62s 2026-04-01 00:55:44.102360 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.43s 2026-04-01 00:55:44.102367 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.17s 2026-04-01 00:55:44.102374 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.00s 2026-04-01 00:55:44.102382 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.91s 2026-04-01 00:55:44.102440 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.88s 2026-04-01 00:55:44.102449 | orchestrator | haproxy-config : Add configuration for mariadb when using single external frontend --- 3.86s 2026-04-01 00:55:44.102456 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.84s 2026-04-01 00:55:44.102463 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.80s 2026-04-01 00:55:44.102470 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.78s 2026-04-01 00:55:44.102477 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.76s 2026-04-01 00:55:44.102484 | orchestrator | 
loadbalancer : Copying over config.json files for services -------------- 3.65s 2026-04-01 00:55:44.102491 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.61s 2026-04-01 00:55:44.102497 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.54s 2026-04-01 00:55:44.102503 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.51s 2026-04-01 00:55:44.102510 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.40s 2026-04-01 00:55:47.129249 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:55:47.131254 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:55:47.132998 | orchestrator | 2026-04-01 00:55:47 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:47.134039 | orchestrator | 2026-04-01 00:55:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:50.176689 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:55:50.176790 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:55:50.177857 | orchestrator | 2026-04-01 00:55:50 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:50.177906 | orchestrator | 2026-04-01 00:55:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:53.210783 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:55:53.211539 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:55:53.212817 | orchestrator | 2026-04-01 00:55:53 | INFO  | Task 
b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:53.212864 | orchestrator | 2026-04-01 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:56.252242 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:55:56.253117 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:55:56.254248 | orchestrator | 2026-04-01 00:55:56 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:56.254279 | orchestrator | 2026-04-01 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:55:59.291295 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:55:59.292523 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:55:59.295248 | orchestrator | 2026-04-01 00:55:59 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:55:59.295312 | orchestrator | 2026-04-01 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:02.350404 | orchestrator | 2026-04-01 00:56:02 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:56:02.351174 | orchestrator | 2026-04-01 00:56:02 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:02.351816 | orchestrator | 2026-04-01 00:56:02 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:02.351856 | orchestrator | 2026-04-01 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:05.384301 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:56:05.384684 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state 
STARTED 2026-04-01 00:56:05.385615 | orchestrator | 2026-04-01 00:56:05 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:05.385671 | orchestrator | 2026-04-01 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:08.413568 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state STARTED 2026-04-01 00:56:08.414045 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:08.414838 | orchestrator | 2026-04-01 00:56:08 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:08.414905 | orchestrator | 2026-04-01 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:11.450689 | orchestrator | 2026-04-01 00:56:11 | INFO  | Task f153c2b9-e292-4a37-a796-0fb954d2066b is in state SUCCESS 2026-04-01 00:56:11.452303 | orchestrator | 2026-04-01 00:56:11.452342 | orchestrator | 2026-04-01 00:56:11.452347 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:56:11.452351 | orchestrator | 2026-04-01 00:56:11.452355 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:56:11.452358 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:00.306) 0:00:00.306 ******* 2026-04-01 00:56:11.452361 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:56:11.452365 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:56:11.452382 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:56:11.452385 | orchestrator | 2026-04-01 00:56:11.452388 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:56:11.452392 | orchestrator | Wednesday 01 April 2026 00:55:46 +0000 (0:00:00.279) 0:00:00.586 ******* 2026-04-01 00:56:11.452395 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 
2026-04-01 00:56:11.452398 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-01 00:56:11.452402 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-01 00:56:11.452405 | orchestrator | 2026-04-01 00:56:11.452408 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-01 00:56:11.452411 | orchestrator | 2026-04-01 00:56:11.452414 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-01 00:56:11.452417 | orchestrator | Wednesday 01 April 2026 00:55:46 +0000 (0:00:00.306) 0:00:00.893 ******* 2026-04-01 00:56:11.452421 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:56:11.452424 | orchestrator | 2026-04-01 00:56:11.452427 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-01 00:56:11.452430 | orchestrator | Wednesday 01 April 2026 00:55:47 +0000 (0:00:00.565) 0:00:01.458 ******* 2026-04-01 00:56:11.452433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:56:11.452437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:56:11.452440 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-01 00:56:11.452443 | orchestrator | 2026-04-01 00:56:11.452446 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-01 00:56:11.452449 | orchestrator | Wednesday 01 April 2026 00:55:48 +0000 (0:00:01.027) 0:00:02.486 ******* 2026-04-01 00:56:11.452459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-01 00:56:11.452501 | orchestrator |
2026-04-01 00:56:11.452504 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-01 00:56:11.452507 | orchestrator | Wednesday 01 April 2026 00:55:49 +0000 (0:00:01.340) 0:00:03.827 *******
2026-04-01 00:56:11.452510 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:56:11.452514 | orchestrator |
2026-04-01 00:56:11.452521 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-01 00:56:11.452525 | orchestrator | Wednesday 01 April 2026 00:55:49 +0000 (0:00:00.477) 0:00:04.304 *******
2026-04-01 00:56:11.452528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-01 00:56:11.452531 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-01 00:56:11.452589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-01 00:56:11.452593 | orchestrator |
2026-04-01 00:56:11.452596 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-01 00:56:11.452630 | orchestrator | Wednesday 01 April 2026 00:55:52 +0000 (0:00:02.847) 0:00:07.151 *******
2026-04-01 00:56:11.452709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.452723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.452732 | orchestrator 
| skipping: [testbed-node-0] 2026-04-01 00:56:11.452739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.452747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.452753 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:11.452758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.452772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-01 00:56:11.452778 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:11.452781 | orchestrator |
2026-04-01 00:56:11.452785 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-01 00:56:11.452788 | orchestrator | Wednesday 01 April 2026 00:55:53 +0000 (0:00:00.770) 0:00:07.922 *******
2026-04-01 00:56:11.452791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-01 00:56:11.452797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.452803 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:11.452806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.452812 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.452816 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:11.452819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-01 00:56:11.452828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-01 00:56:11.452833 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:56:11.452836 | orchestrator |
2026-04-01 00:56:11.452840 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-01 00:56:11.452843 | orchestrator | Wednesday 01 April 2026 00:55:54 +0000 (0:00:01.052) 0:00:08.974 *******
2026-04-01 00:56:11.452848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452877 | orchestrator | 2026-04-01 00:56:11.452880 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-01 00:56:11.452883 | orchestrator | Wednesday 01 April 2026 00:55:57 +0000 (0:00:02.479) 0:00:11.454 ******* 2026-04-01 00:56:11.452886 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:11.452890 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:11.452893 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:11.452896 | orchestrator | 2026-04-01 00:56:11.452899 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-01 00:56:11.452902 | orchestrator | Wednesday 01 April 2026 00:55:59 +0000 (0:00:02.578) 0:00:14.033 ******* 2026-04-01 00:56:11.452905 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:56:11.452909 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:56:11.452912 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:56:11.452915 | orchestrator | 2026-04-01 00:56:11.452921 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-01 00:56:11.452924 | orchestrator | Wednesday 01 April 2026 00:56:01 +0000 (0:00:01.766) 0:00:15.799 ******* 2026-04-01 00:56:11.452929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 00:56:11.452942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-01 00:56:11.452955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 
2026-04-01 00:56:11.452958 | orchestrator | 2026-04-01 00:56:11.452962 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-01 00:56:11.452965 | orchestrator | Wednesday 01 April 2026 00:56:03 +0000 (0:00:02.078) 0:00:17.877 ******* 2026-04-01 00:56:11.452968 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:56:11.452972 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:56:11.452975 | orchestrator | } 2026-04-01 00:56:11.452978 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:56:11.452981 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:56:11.452985 | orchestrator | } 2026-04-01 00:56:11.452988 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:56:11.452991 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:56:11.452994 | orchestrator | } 2026-04-01 00:56:11.452997 | orchestrator | 2026-04-01 00:56:11.453001 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:56:11.453004 | orchestrator | Wednesday 01 April 2026 00:56:03 +0000 (0:00:00.474) 0:00:18.351 ******* 2026-04-01 00:56:11.453007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.453015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.453019 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:11.453022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.453027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 00:56:11.453031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.453036 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:11.453041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-01 00:56:11.453045 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:11.453048 | orchestrator | 2026-04-01 00:56:11.453051 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-04-01 00:56:11.453055 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:00.779) 0:00:19.131 ******* 2026-04-01 00:56:11.453058 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:11.453061 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:56:11.453064 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:56:11.453067 | orchestrator | 2026-04-01 00:56:11.453070 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-01 00:56:11.453073 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:00.276) 0:00:19.407 ******* 2026-04-01 00:56:11.453077 | orchestrator | 2026-04-01 00:56:11.453080 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-01 00:56:11.453083 | orchestrator | Wednesday 01 April 2026 00:56:05 +0000 (0:00:00.067) 0:00:19.475 ******* 2026-04-01 00:56:11.453086 | orchestrator | 2026-04-01 00:56:11.453089 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-01 00:56:11.453092 | orchestrator | Wednesday 01 April 2026 00:56:05 +0000 (0:00:00.064) 0:00:19.540 ******* 2026-04-01 00:56:11.453096 | orchestrator | 2026-04-01 00:56:11.453099 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-01 00:56:11.453102 | orchestrator | Wednesday 01 April 2026 00:56:05 +0000 (0:00:00.067) 0:00:19.607 ******* 2026-04-01 00:56:11.453105 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:11.453108 | orchestrator | 2026-04-01 00:56:11.453111 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-01 00:56:11.453116 | orchestrator | Wednesday 01 April 2026 00:56:05 +0000 (0:00:00.790) 0:00:20.398 ******* 2026-04-01 00:56:11.453120 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:56:11.453123 | orchestrator | 
2026-04-01 00:56:11.453126 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-01 00:56:11.453129 | orchestrator | Wednesday 01 April 2026 00:56:06 +0000 (0:00:00.291) 0:00:20.690 ******* 2026-04-01 00:56:11.453137 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_h0nzper0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_h0nzper0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_h0nzper0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_h0nzper0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n 
File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:56:11.453143 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3yvrhyi9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3yvrhyi9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3yvrhyi9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_3yvrhyi9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:56:11.453152 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_32jm_q15/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_32jm_q15/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_32jm_q15/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_32jm_q15/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:56:11.453156 | orchestrator | 2026-04-01 00:56:11.453159 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:56:11.453163 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-01 00:56:11.453167 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:56:11.453170 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-01 00:56:11.453173 | orchestrator | 2026-04-01 00:56:11.453222 | orchestrator | 2026-04-01 00:56:11.453229 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:56:11.453232 | orchestrator | Wednesday 01 April 2026 00:56:09 +0000 (0:00:02.925) 0:00:23.616 ******* 2026-04-01 00:56:11.453235 | orchestrator | =============================================================================== 2026-04-01 00:56:11.453239 | orchestrator | opensearch : Restart opensearch container ------------------------------- 2.93s 2026-04-01 00:56:11.453242 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2026-04-01 00:56:11.453245 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.58s 2026-04-01 00:56:11.453248 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.48s 2026-04-01 00:56:11.453251 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.08s 2026-04-01 00:56:11.453254 | orchestrator | opensearch : Copying over 
opensearch-dashboards config file ------------- 1.77s 2026-04-01 00:56:11.453257 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.34s 2026-04-01 00:56:11.453261 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.05s 2026-04-01 00:56:11.453264 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.03s 2026-04-01 00:56:11.453267 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.79s 2026-04-01 00:56:11.453270 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.78s 2026-04-01 00:56:11.453273 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.77s 2026-04-01 00:56:11.453276 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-01 00:56:11.453280 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-04-01 00:56:11.453283 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.47s 2026-04-01 00:56:11.453286 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2026-04-01 00:56:11.453289 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.29s 2026-04-01 00:56:11.453292 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-01 00:56:11.453295 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.28s 2026-04-01 00:56:11.453299 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.20s 2026-04-01 00:56:11.453302 | orchestrator | 2026-04-01 00:56:11 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:11.455001 | orchestrator | 2026-04-01 
00:56:11 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:11.455289 | orchestrator | 2026-04-01 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:14.489175 | orchestrator | 2026-04-01 00:56:14 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:14.490143 | orchestrator | 2026-04-01 00:56:14 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:14.492890 | orchestrator | 2026-04-01 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:17.524461 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:17.527088 | orchestrator | 2026-04-01 00:56:17 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:17.527139 | orchestrator | 2026-04-01 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:20.568415 | orchestrator | 2026-04-01 00:56:20 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:20.569421 | orchestrator | 2026-04-01 00:56:20 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:20.569533 | orchestrator | 2026-04-01 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:23.619038 | orchestrator | 2026-04-01 00:56:23 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:23.621326 | orchestrator | 2026-04-01 00:56:23 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:56:23.621410 | orchestrator | 2026-04-01 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:56:26.675625 | orchestrator | 2026-04-01 00:56:26 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED 2026-04-01 00:56:26.675785 | orchestrator | 2026-04-01 00:56:26 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state 
STARTED
2026-04-01 00:56:26.675801 | orchestrator | 2026-04-01 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:29.722262 | orchestrator | 2026-04-01 00:56:29 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:29.724463 | orchestrator | 2026-04-01 00:56:29 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:29.724531 | orchestrator | 2026-04-01 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:32.769598 | orchestrator | 2026-04-01 00:56:32 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:32.771289 | orchestrator | 2026-04-01 00:56:32 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:32.771370 | orchestrator | 2026-04-01 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:35.811929 | orchestrator | 2026-04-01 00:56:35 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:35.813542 | orchestrator | 2026-04-01 00:56:35 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:35.813586 | orchestrator | 2026-04-01 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:38.858610 | orchestrator | 2026-04-01 00:56:38 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:38.860688 | orchestrator | 2026-04-01 00:56:38 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:38.860753 | orchestrator | 2026-04-01 00:56:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:41.905691 | orchestrator | 2026-04-01 00:56:41 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:41.907401 | orchestrator | 2026-04-01 00:56:41 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:41.907447 | orchestrator | 2026-04-01 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:44.947294 | orchestrator | 2026-04-01 00:56:44 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:44.947925 | orchestrator | 2026-04-01 00:56:44 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:44.948267 | orchestrator | 2026-04-01 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:47.991059 | orchestrator | 2026-04-01 00:56:47 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:47.992624 | orchestrator | 2026-04-01 00:56:47 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:47.992686 | orchestrator | 2026-04-01 00:56:47 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:51.034273 | orchestrator | 2026-04-01 00:56:51 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:51.037265 | orchestrator | 2026-04-01 00:56:51 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:51.037352 | orchestrator | 2026-04-01 00:56:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:54.087595 | orchestrator | 2026-04-01 00:56:54 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:54.088132 | orchestrator | 2026-04-01 00:56:54 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:54.088169 | orchestrator | 2026-04-01 00:56:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:56:57.115173 | orchestrator | 2026-04-01 00:56:57 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state STARTED
2026-04-01 00:56:57.116798 | orchestrator | 2026-04-01 00:56:57 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:56:57.116857 | orchestrator | 2026-04-01 00:56:57 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:00.155407 |
orchestrator | 2026-04-01 00:57:00 | INFO  | Task ef073029-b69f-4c7d-87f3-f4a2f64284db is in state SUCCESS
2026-04-01 00:57:00.156365 | orchestrator |
2026-04-01 00:57:00.156432 | orchestrator |
2026-04-01 00:57:00.156442 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-04-01 00:57:00.156450 | orchestrator |
2026-04-01 00:57:00.156456 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-01 00:57:00.156463 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:00.100) 0:00:00.100 *******
2026-04-01 00:57:00.156470 | orchestrator | ok: [localhost] => {
2026-04-01 00:57:00.156478 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-04-01 00:57:00.156484 | orchestrator | }
2026-04-01 00:57:00.156490 | orchestrator |
2026-04-01 00:57:00.156496 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-01 00:57:00.156502 | orchestrator | Wednesday 01 April 2026 00:55:45 +0000 (0:00:00.049) 0:00:00.150 *******
2026-04-01 00:57:00.156509 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-01 00:57:00.156518 | orchestrator | ...ignoring
2026-04-01 00:57:00.156524 | orchestrator |
2026-04-01 00:57:00.156532 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-01 00:57:00.156538 | orchestrator | Wednesday 01 April 2026 00:55:48 +0000 (0:00:02.948) 0:00:03.098 *******
2026-04-01 00:57:00.156545 | orchestrator | skipping: [localhost]
2026-04-01 00:57:00.156551 | orchestrator |
2026-04-01 00:57:00.156556 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-01 00:57:00.156563 | orchestrator | Wednesday 01 April 2026 00:55:48 +0000 (0:00:00.056) 0:00:03.155 *******
2026-04-01 00:57:00.156569 | orchestrator | ok: [localhost]
2026-04-01 00:57:00.156576 | orchestrator |
2026-04-01 00:57:00.156583 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:57:00.156589 | orchestrator |
2026-04-01 00:57:00.156596 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:57:00.156602 | orchestrator | Wednesday 01 April 2026 00:55:49 +0000 (0:00:00.203) 0:00:03.358 *******
2026-04-01 00:57:00.156608 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:00.156615 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:57:00.156621 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:57:00.156628 | orchestrator |
2026-04-01 00:57:00.156634 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:57:00.156710 | orchestrator | Wednesday 01 April 2026 00:55:49 +0000 (0:00:00.288) 0:00:03.647 *******
2026-04-01 00:57:00.156721 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-01 00:57:00.156729 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-01 00:57:00.156761 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-01 00:57:00.156767 | orchestrator |
2026-04-01 00:57:00.156774 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-01 00:57:00.156780 | orchestrator |
2026-04-01 00:57:00.156788 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-01 00:57:00.157268 | orchestrator | Wednesday 01 April 2026 00:55:49 +0000 (0:00:00.382) 0:00:04.029 *******
2026-04-01 00:57:00.157281 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:57:00.157289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:57:00.157297 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:57:00.157303 | orchestrator |
2026-04-01 00:57:00.157310 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-01 00:57:00.157317 | orchestrator | Wednesday 01 April 2026 00:55:50 +0000 (0:00:00.344) 0:00:04.374 *******
2026-04-01 00:57:00.157325 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:00.157333 | orchestrator |
2026-04-01 00:57:00.157339 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-01 00:57:00.157346 | orchestrator | Wednesday 01 April 2026 00:55:50 +0000 (0:00:00.734) 0:00:05.108 *******
2026-04-01 00:57:00.157484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.157497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.157519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-01 00:57:00.157547 | orchestrator |
2026-04-01 00:57:00.157556 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-01 00:57:00.157562 | orchestrator | Wednesday 01 April 2026 00:55:53 +0000 (0:00:02.721) 0:00:07.829 *******
2026-04-01 00:57:00.157569 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:57:00.157577 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:57:00.157583 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:00.157589 | orchestrator |
2026-04-01 00:57:00.157596 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-01 00:57:00.157602 | orchestrator | Wednesday 01 April 2026 00:55:54 +0000 (0:00:00.547) 0:00:08.377 *******
2026-04-01 00:57:00.157609 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:57:00.157615 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:57:00.157621 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:00.157627 | orchestrator |
2026-04-01 00:57:00.157634 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-01 00:57:00.157640 | orchestrator | Wednesday 01 April 2026 00:55:55 +0000 (0:00:01.341) 0:00:09.719 *******
2026-04-01 00:57:00.157648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.157670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.157678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-01 00:57:00.157690 | orchestrator |
2026-04-01 00:57:00.157696 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-01 00:57:00.157703 | orchestrator | Wednesday 01 April 2026 00:55:59 +0000 (0:00:03.652) 0:00:13.371 *******
2026-04-01 00:57:00.157709 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:57:00.157716 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:57:00.157722 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:00.157728 | orchestrator |
2026-04-01 00:57:00.157735 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-01 00:57:00.157741 | orchestrator | Wednesday 01 April 2026 00:56:00 +0000 (0:00:01.022) 0:00:14.394 *******
2026-04-01 00:57:00.157747 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:57:00.157754 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:57:00.157761 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:57:00.157767 | orchestrator |
2026-04-01 00:57:00.157773 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-01 00:57:00.157780 | orchestrator | Wednesday 01 April 2026 00:56:03 +0000 (0:00:03.727) 0:00:18.122 *******
2026-04-01 00:57:00.157786 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:00.157793 | orchestrator |
2026-04-01 00:57:00.157799 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-01 00:57:00.157805 | orchestrator | Wednesday 01 April 2026 00:56:04 +0000 (0:00:00.488) 0:00:18.610 *******
2026-04-01 00:57:00.157824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image':
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157837 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.157844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157851 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.157870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157882 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.157889 | orchestrator | 2026-04-01 00:57:00.157895 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-01 00:57:00.157902 | orchestrator | Wednesday 01 April 2026 
00:56:06 +0000 (0:00:02.546) 0:00:21.157 ******* 2026-04-01 00:57:00.157909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157916 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:57:00.157931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157964 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.157972 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.157979 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.157986 | orchestrator | 2026-04-01 00:57:00.157993 | orchestrator | TASK [service-cert-copy 
: mariadb | Copying over backend internal TLS key] ***** 2026-04-01 00:57:00.158000 | orchestrator | Wednesday 01 April 2026 00:56:09 +0000 (0:00:02.504) 0:00:23.662 ******* 2026-04-01 00:57:00.158129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.158149 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 
00:57:00.158165 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.158191 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:57:00.158198 | orchestrator | 2026-04-01 00:57:00.158204 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-01 00:57:00.158216 | orchestrator | Wednesday 01 April 2026 00:56:11 +0000 (0:00:02.220) 0:00:25.883 ******* 2026-04-01 00:57:00.158224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.158235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.158254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-01 00:57:00.158262 | orchestrator | 2026-04-01 00:57:00.158269 | orchestrator | 
TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-01 00:57:00.158276 | orchestrator | Wednesday 01 April 2026 00:56:14 +0000 (0:00:02.610) 0:00:28.494 ******* 2026-04-01 00:57:00.158283 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:57:00.158289 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:00.158296 | orchestrator | } 2026-04-01 00:57:00.158303 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:57:00.158309 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:00.158316 | orchestrator | } 2026-04-01 00:57:00.158323 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:57:00.158329 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:00.158336 | orchestrator | } 2026-04-01 00:57:00.158342 | orchestrator | 2026-04-01 00:57:00.158350 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:57:00.158356 | orchestrator | Wednesday 01 April 2026 00:56:14 +0000 (0:00:00.323) 0:00:28.817 ******* 2026-04-01 00:57:00.158367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.158379 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.158398 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.158419 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158425 | orchestrator | 2026-04-01 00:57:00.158432 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-01 00:57:00.158438 | orchestrator | Wednesday 01 April 2026 00:56:16 +0000 (0:00:02.300) 0:00:31.117 ******* 2026-04-01 00:57:00.158445 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158451 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158457 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158463 | orchestrator | 2026-04-01 00:57:00.158470 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-01 00:57:00.158476 | orchestrator | Wednesday 01 April 2026 00:56:17 +0000 (0:00:00.440) 0:00:31.558 ******* 2026-04-01 00:57:00.158483 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158489 | orchestrator | 2026-04-01 00:57:00.158500 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-01 00:57:00.158507 | orchestrator | Wednesday 01 April 
2026 00:56:17 +0000 (0:00:00.114) 0:00:31.672 ******* 2026-04-01 00:57:00.158513 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158520 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158526 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158533 | orchestrator | 2026-04-01 00:57:00.158540 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-01 00:57:00.158546 | orchestrator | Wednesday 01 April 2026 00:56:17 +0000 (0:00:00.300) 0:00:31.973 ******* 2026-04-01 00:57:00.158553 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158559 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158565 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158571 | orchestrator | 2026-04-01 00:57:00.158578 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-01 00:57:00.158585 | orchestrator | Wednesday 01 April 2026 00:56:17 +0000 (0:00:00.278) 0:00:32.251 ******* 2026-04-01 00:57:00.158591 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158598 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158605 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158612 | orchestrator | 2026-04-01 00:57:00.158618 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-01 00:57:00.158625 | orchestrator | Wednesday 01 April 2026 00:56:18 +0000 (0:00:00.279) 0:00:32.531 ******* 2026-04-01 00:57:00.158631 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158638 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158644 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158650 | orchestrator | 2026-04-01 00:57:00.158657 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-01 00:57:00.158664 | orchestrator | Wednesday 01 April 
2026 00:56:18 +0000 (0:00:00.476) 0:00:33.008 ******* 2026-04-01 00:57:00.158671 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158677 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158684 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158690 | orchestrator | 2026-04-01 00:57:00.158697 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-01 00:57:00.158703 | orchestrator | Wednesday 01 April 2026 00:56:18 +0000 (0:00:00.313) 0:00:33.321 ******* 2026-04-01 00:57:00.158710 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158722 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158728 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158735 | orchestrator | 2026-04-01 00:57:00.158741 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-01 00:57:00.158748 | orchestrator | Wednesday 01 April 2026 00:56:19 +0000 (0:00:00.289) 0:00:33.611 ******* 2026-04-01 00:57:00.158754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-01 00:57:00.158761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-01 00:57:00.158768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-01 00:57:00.158775 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.158782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-01 00:57:00.158788 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-01 00:57:00.158795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-01 00:57:00.158801 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.158807 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-01 00:57:00.158813 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-01 00:57:00.158820 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-01 00:57:00.158826 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.158832 | orchestrator | 2026-04-01 00:57:00.158839 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-01 00:57:00.158846 | orchestrator | Wednesday 01 April 2026 00:56:19 +0000 (0:00:00.369) 0:00:33.980 ******* 2026-04-01 00:57:00.158853 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159018 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159032 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159039 | orchestrator | 2026-04-01 00:57:00.159046 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-01 00:57:00.159052 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.452) 0:00:34.433 ******* 2026-04-01 00:57:00.159059 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159065 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159072 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159078 | orchestrator | 2026-04-01 00:57:00.159084 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-01 00:57:00.159091 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.326) 0:00:34.759 ******* 2026-04-01 00:57:00.159097 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159104 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159111 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159117 | orchestrator | 2026-04-01 00:57:00.159132 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-01 00:57:00.159139 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.292) 0:00:35.052 ******* 2026-04-01 00:57:00.159145 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:57:00.159152 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159159 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159165 | orchestrator | 2026-04-01 00:57:00.159172 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-01 00:57:00.159178 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:00.286) 0:00:35.338 ******* 2026-04-01 00:57:00.159185 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159190 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159196 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159202 | orchestrator | 2026-04-01 00:57:00.159207 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-01 00:57:00.159214 | orchestrator | Wednesday 01 April 2026 00:56:21 +0000 (0:00:00.432) 0:00:35.771 ******* 2026-04-01 00:57:00.159220 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159225 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159244 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159252 | orchestrator | 2026-04-01 00:57:00.159258 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-01 00:57:00.159265 | orchestrator | Wednesday 01 April 2026 00:56:21 +0000 (0:00:00.293) 0:00:36.064 ******* 2026-04-01 00:57:00.159271 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159278 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159285 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159291 | orchestrator | 2026-04-01 00:57:00.159298 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-01 00:57:00.159304 | orchestrator | Wednesday 01 April 2026 00:56:22 +0000 (0:00:00.306) 0:00:36.370 ******* 2026-04-01 00:57:00.159311 | orchestrator | skipping: [testbed-node-0] 
2026-04-01 00:57:00.159318 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159377 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159383 | orchestrator | 2026-04-01 00:57:00.159387 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-01 00:57:00.159391 | orchestrator | Wednesday 01 April 2026 00:56:22 +0000 (0:00:00.313) 0:00:36.684 ******* 2026-04-01 00:57:00.159397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159402 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159435 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159447 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159453 | orchestrator | 2026-04-01 00:57:00.159459 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-01 00:57:00.159465 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:02.089) 0:00:38.773 ******* 2026-04-01 00:57:00.159473 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159483 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159492 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159497 | orchestrator | 2026-04-01 00:57:00.159503 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-01 00:57:00.159509 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:00.478) 0:00:39.252 ******* 2026-04-01 00:57:00.159526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159540 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159553 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-01 00:57:00.159579 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159585 | orchestrator | 2026-04-01 00:57:00.159591 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-01 00:57:00.159598 | orchestrator | Wednesday 01 April 2026 00:56:26 +0000 (0:00:02.067) 0:00:41.319 ******* 2026-04-01 00:57:00.159604 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159611 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159617 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159623 | orchestrator | 2026-04-01 00:57:00.159629 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-01 00:57:00.159636 | orchestrator | Wednesday 01 April 2026 00:56:27 +0000 (0:00:00.308) 0:00:41.627 ******* 2026-04-01 00:57:00.159642 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159650 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159657 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159664 | orchestrator | 2026-04-01 00:57:00.159672 | orchestrator | TASK 
[service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-01 00:57:00.159678 | orchestrator | Wednesday 01 April 2026 00:56:27 +0000 (0:00:00.306) 0:00:41.934 ******* 2026-04-01 00:57:00.159684 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159688 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159693 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159698 | orchestrator | 2026-04-01 00:57:00.159703 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-01 00:57:00.159707 | orchestrator | Wednesday 01 April 2026 00:56:28 +0000 (0:00:00.485) 0:00:42.420 ******* 2026-04-01 00:57:00.159712 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159717 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159721 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159727 | orchestrator | 2026-04-01 00:57:00.159734 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-01 00:57:00.159743 | orchestrator | Wednesday 01 April 2026 00:56:28 +0000 (0:00:00.518) 0:00:42.939 ******* 2026-04-01 00:57:00.159751 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.159798 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.159807 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.159813 | orchestrator | 2026-04-01 00:57:00.159819 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-01 00:57:00.159826 | orchestrator | Wednesday 01 April 2026 00:56:28 +0000 (0:00:00.285) 0:00:43.225 ******* 2026-04-01 00:57:00.159833 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:00.159838 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:00.159845 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:00.159851 | orchestrator | 2026-04-01 00:57:00.159857 | orchestrator | TASK 
[mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-01 00:57:00.159863 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:01.202) 0:00:44.427 ******* 2026-04-01 00:57:00.159878 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:00.159886 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:00.159892 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:00.159899 | orchestrator | 2026-04-01 00:57:00.159906 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-01 00:57:00.159913 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:00.324) 0:00:44.752 ******* 2026-04-01 00:57:00.159919 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:00.159926 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:00.159933 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:00.159964 | orchestrator | 2026-04-01 00:57:00.159972 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-01 00:57:00.159978 | orchestrator | Wednesday 01 April 2026 00:56:30 +0000 (0:00:00.330) 0:00:45.082 ******* 2026-04-01 00:57:00.159986 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-01 00:57:00.159994 | orchestrator | ...ignoring 2026-04-01 00:57:00.160001 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-01 00:57:00.160007 | orchestrator | ...ignoring 2026-04-01 00:57:00.160014 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-01 00:57:00.160020 | orchestrator | ...ignoring 2026-04-01 00:57:00.160027 | orchestrator | 2026-04-01 00:57:00.160034 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-01 00:57:00.160041 | orchestrator | Wednesday 01 April 2026 00:56:41 +0000 (0:00:10.823) 0:00:55.906 ******* 2026-04-01 00:57:00.160049 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:00.160056 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:00.160062 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:00.160069 | orchestrator | 2026-04-01 00:57:00.160080 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-01 00:57:00.160115 | orchestrator | Wednesday 01 April 2026 00:56:42 +0000 (0:00:00.524) 0:00:56.430 ******* 2026-04-01 00:57:00.160119 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.160123 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160128 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160131 | orchestrator | 2026-04-01 00:57:00.160135 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-01 00:57:00.160139 | orchestrator | Wednesday 01 April 2026 00:56:42 +0000 (0:00:00.324) 0:00:56.755 ******* 2026-04-01 00:57:00.160143 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.160148 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160155 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160161 | orchestrator | 2026-04-01 00:57:00.160168 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-01 00:57:00.160178 | orchestrator | Wednesday 01 April 2026 00:56:42 +0000 (0:00:00.325) 0:00:57.081 ******* 2026-04-01 00:57:00.160185 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 00:57:00.160203 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160210 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160217 | orchestrator | 2026-04-01 00:57:00.160225 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-01 00:57:00.160232 | orchestrator | Wednesday 01 April 2026 00:56:43 +0000 (0:00:00.306) 0:00:57.388 ******* 2026-04-01 00:57:00.160239 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:00.160246 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:00.160250 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:00.160254 | orchestrator | 2026-04-01 00:57:00.160258 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-01 00:57:00.160262 | orchestrator | Wednesday 01 April 2026 00:56:43 +0000 (0:00:00.294) 0:00:57.683 ******* 2026-04-01 00:57:00.160273 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:00.160277 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160281 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160285 | orchestrator | 2026-04-01 00:57:00.160289 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:00.160293 | orchestrator | Wednesday 01 April 2026 00:56:43 +0000 (0:00:00.480) 0:00:58.163 ******* 2026-04-01 00:57:00.160297 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160301 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160306 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-01 00:57:00.160310 | orchestrator | 2026-04-01 00:57:00.160314 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-01 00:57:00.160318 | orchestrator | Wednesday 01 April 2026 00:56:44 +0000 (0:00:00.358) 0:00:58.522 ******* 2026-04-01 
00:57:00.160324 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_resg8f5_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_resg8f5_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_resg8f5_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for 
http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:57:00.160366 | orchestrator | 2026-04-01 00:57:00.160381 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-01 00:57:00.160388 | orchestrator | Wednesday 01 April 2026 00:56:47 +0000 (0:00:03.475) 0:01:01.998 ******* 2026-04-01 00:57:00.160395 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160401 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160408 | orchestrator | 2026-04-01 00:57:00.160414 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-01 00:57:00.160421 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:00.517) 0:01:02.515 ******* 2026-04-01 00:57:00.160426 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:00.160431 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:00.160435 | orchestrator | 2026-04-01 00:57:00.160439 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-01 00:57:00.160447 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:00.191) 0:01:02.706 ******* 2026-04-01 00:57:00.160452 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:00.160456 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:00.160465 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-01 00:57:00.160470 | orchestrator | 2026-04-01 00:57:00.160474 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-01 00:57:00.160477 | orchestrator | skipping: no hosts matched 2026-04-01 00:57:00.160481 | orchestrator | 2026-04-01 00:57:00.160485 | orchestrator | PLAY [Start mariadb services] ************************************************** 
2026-04-01 00:57:00.160489 | orchestrator | 2026-04-01 00:57:00.160493 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-01 00:57:00.160497 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:00.237) 0:01:02.943 ******* 2026-04-01 00:57:00.160502 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_c83pq5bw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_c83pq5bw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_c83pq5bw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_c83pq5bw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 
429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 00:57:00.160506 | orchestrator | 2026-04-01 00:57:00.160510 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:57:00.160514 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-01 00:57:00.160519 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1  2026-04-01 00:57:00.160530 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1  2026-04-01 00:57:00.160538 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1  2026-04-01 00:57:00.160544 | orchestrator | 2026-04-01 00:57:00.160548 | orchestrator | 2026-04-01 00:57:00.160552 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:57:00.160556 | orchestrator | Wednesday 01 April 2026 00:56:57 +0000 (0:00:09.097) 0:01:12.041 ******* 2026-04-01 00:57:00.160560 | orchestrator | =============================================================================== 2026-04-01 00:57:00.160564 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s 2026-04-01 00:57:00.160568 | orchestrator | mariadb : Restart 
MariaDB container ------------------------------------- 9.10s 2026-04-01 00:57:00.160574 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.73s 2026-04-01 00:57:00.160578 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.65s 2026-04-01 00:57:00.160582 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 3.48s 2026-04-01 00:57:00.160606 | orchestrator | Check MariaDB service --------------------------------------------------- 2.95s 2026-04-01 00:57:00.160612 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.72s 2026-04-01 00:57:00.160615 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.61s 2026-04-01 00:57:00.160619 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.55s 2026-04-01 00:57:00.160623 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.50s 2026-04-01 00:57:00.160627 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.30s 2026-04-01 00:57:00.160631 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.22s 2026-04-01 00:57:00.160635 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.09s 2026-04-01 00:57:00.160639 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.07s 2026-04-01 00:57:00.160643 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.34s 2026-04-01 00:57:00.160646 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 1.20s 2026-04-01 00:57:00.160650 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.02s 2026-04-01 00:57:00.160654 | orchestrator | mariadb : include_tasks 
------------------------------------------------- 0.73s 2026-04-01 00:57:00.160658 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.55s 2026-04-01 00:57:00.160662 | orchestrator | mariadb : Divide hosts by their MariaDB service port liveness ----------- 0.52s 2026-04-01 00:57:00.160666 | orchestrator | 2026-04-01 00:57:00 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED 2026-04-01 00:57:00.160670 | orchestrator | 2026-04-01 00:57:00 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:00.161438 | orchestrator | 2026-04-01 00:57:00 | INFO  | Task 55eddf99-c261-469c-9bd3-2a23a09ff428 is in state STARTED 2026-04-01 00:57:00.161905 | orchestrator | 2026-04-01 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:03.201354 | orchestrator | 2026-04-01 00:57:03 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED 2026-04-01 00:57:03.201696 | orchestrator | 2026-04-01 00:57:03 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:03.202448 | orchestrator | 2026-04-01 00:57:03 | INFO  | Task 55eddf99-c261-469c-9bd3-2a23a09ff428 is in state STARTED 2026-04-01 00:57:03.202485 | orchestrator | 2026-04-01 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:06.232461 | orchestrator | 2026-04-01 00:57:06 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED 2026-04-01 00:57:06.232866 | orchestrator | 2026-04-01 00:57:06 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:06.233725 | orchestrator | 2026-04-01 00:57:06 | INFO  | Task 55eddf99-c261-469c-9bd3-2a23a09ff428 is in state STARTED 2026-04-01 00:57:06.233863 | orchestrator | 2026-04-01 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:09.257838 | orchestrator | 2026-04-01 00:57:09 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED 
2026-04-01 00:57:09.258986 | orchestrator | 2026-04-01 00:57:09 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:09.260181 | orchestrator | 2026-04-01 00:57:09 | INFO  | Task 55eddf99-c261-469c-9bd3-2a23a09ff428 is in state STARTED 2026-04-01 00:57:09.260210 | orchestrator | 2026-04-01 00:57:09 | INFO  | Wait 1
second(s) until the next check 2026-04-01 00:57:33.564928 | orchestrator | 2026-04-01 00:57:33 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED 2026-04-01 00:57:33.566755 | orchestrator | 2026-04-01 00:57:33 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:33.568214 | orchestrator | 2026-04-01 00:57:33 | INFO  | Task 55eddf99-c261-469c-9bd3-2a23a09ff428 is in state SUCCESS 2026-04-01 00:57:33.570140 | orchestrator | 2026-04-01 00:57:33.570221 | orchestrator | 2026-04-01 00:57:33.570236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 00:57:33.570249 | orchestrator | 2026-04-01 00:57:33.570262 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 00:57:33.570275 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.313) 0:00:00.313 ******* 2026-04-01 00:57:33.570672 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.570687 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.570698 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.570709 | orchestrator | 2026-04-01 00:57:33.570721 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 00:57:33.570733 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.289) 0:00:00.602 ******* 2026-04-01 00:57:33.570745 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-01 00:57:33.570756 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-01 00:57:33.570767 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-01 00:57:33.570778 | orchestrator | 2026-04-01 00:57:33.570801 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-01 00:57:33.570811 | orchestrator | 2026-04-01 00:57:33.570818 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-04-01 00:57:33.570825 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.305) 0:00:00.908 ******* 2026-04-01 00:57:33.570832 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:33.570840 | orchestrator | 2026-04-01 00:57:33.570847 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-01 00:57:33.570854 | orchestrator | Wednesday 01 April 2026 00:57:02 +0000 (0:00:00.581) 0:00:01.489 ******* 2026-04-01 00:57:33.570866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.570918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.570935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.570956 | orchestrator | 2026-04-01 00:57:33.570969 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-01 00:57:33.570981 | orchestrator | Wednesday 01 April 2026 00:57:03 +0000 (0:00:01.567) 0:00:03.057 ******* 2026-04-01 00:57:33.570989 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.570996 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.571003 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.571009 | orchestrator | 2026-04-01 00:57:33.571023 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-04-01 00:57:33.571030 | orchestrator | Wednesday 01 April 2026 00:57:04 +0000 (0:00:00.244) 0:00:03.302 ******* 2026-04-01 00:57:33.571037 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:57:33.571044 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:57:33.571051 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:57:33.571057 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:57:33.571064 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:57:33.571071 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:57:33.571081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:57:33.571088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:57:33.571095 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:57:33.571101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:57:33.571108 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:57:33.571115 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:57:33.571122 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:57:33.571137 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:57:33.571189 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:57:33.571200 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:57:33.571206 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-01 00:57:33.571213 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-01 00:57:33.571220 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-01 00:57:33.571227 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-01 00:57:33.571234 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-01 00:57:33.571241 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-01 00:57:33.571247 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-01 00:57:33.571254 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-01 00:57:33.571261 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-01 00:57:33.571270 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-01 00:57:33.571279 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-01 00:57:33.571287 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-01 00:57:33.571295 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'keystone', 'enabled': True}) 2026-04-01 00:57:33.571303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-01 00:57:33.571311 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-01 00:57:33.571319 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-01 00:57:33.571326 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-01 00:57:33.571335 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-01 00:57:33.571343 | orchestrator | 2026-04-01 00:57:33.571351 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.571359 | orchestrator | Wednesday 01 April 2026 00:57:04 +0000 (0:00:00.602) 0:00:03.905 ******* 2026-04-01 00:57:33.571366 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.571375 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.571393 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.571410 | orchestrator | 2026-04-01 00:57:33.571424 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.571436 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.357) 0:00:04.262 ******* 2026-04-01 00:57:33.571448 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571460 | orchestrator | 2026-04-01 00:57:33.571472 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2026-04-01 00:57:33.571491 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.101) 0:00:04.364 ******* 2026-04-01 00:57:33.571505 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571518 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.571529 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.571540 | orchestrator | 2026-04-01 00:57:33.571548 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.571556 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.247) 0:00:04.612 ******* 2026-04-01 00:57:33.571564 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.571576 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.571584 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.571592 | orchestrator | 2026-04-01 00:57:33.571600 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.571609 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.235) 0:00:04.848 ******* 2026-04-01 00:57:33.571617 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571624 | orchestrator | 2026-04-01 00:57:33.571632 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.571643 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.111) 0:00:04.959 ******* 2026-04-01 00:57:33.571655 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571666 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.571676 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.571688 | orchestrator | 2026-04-01 00:57:33.571699 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.571708 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.447) 0:00:05.406 ******* 2026-04-01 
00:57:33.571717 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.571727 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.571738 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.571748 | orchestrator | 2026-04-01 00:57:33.571757 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.571767 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.253) 0:00:05.660 ******* 2026-04-01 00:57:33.571777 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571788 | orchestrator | 2026-04-01 00:57:33.571799 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.571809 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.097) 0:00:05.757 ******* 2026-04-01 00:57:33.571820 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571831 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.571843 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.571854 | orchestrator | 2026-04-01 00:57:33.571865 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.571876 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.253) 0:00:06.010 ******* 2026-04-01 00:57:33.571888 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.571898 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.571911 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.571923 | orchestrator | 2026-04-01 00:57:33.571935 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.571948 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.309) 0:00:06.319 ******* 2026-04-01 00:57:33.571959 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.571971 | orchestrator | 2026-04-01 00:57:33.571983 | orchestrator | TASK [horizon : Update custom 
policy file name] ******************************** 2026-04-01 00:57:33.571994 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.099) 0:00:06.419 ******* 2026-04-01 00:57:33.572005 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572016 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572026 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572037 | orchestrator | 2026-04-01 00:57:33.572049 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572068 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.345) 0:00:06.764 ******* 2026-04-01 00:57:33.572080 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572090 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572102 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572113 | orchestrator | 2026-04-01 00:57:33.572125 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572136 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.268) 0:00:07.032 ******* 2026-04-01 00:57:33.572148 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572178 | orchestrator | 2026-04-01 00:57:33.572190 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.572202 | orchestrator | Wednesday 01 April 2026 00:57:08 +0000 (0:00:00.102) 0:00:07.135 ******* 2026-04-01 00:57:33.572215 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572227 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572238 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572249 | orchestrator | 2026-04-01 00:57:33.572260 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572272 | orchestrator | Wednesday 01 April 2026 00:57:08 +0000 (0:00:00.268) 0:00:07.404 
******* 2026-04-01 00:57:33.572284 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572297 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572309 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572322 | orchestrator | 2026-04-01 00:57:33.572334 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572345 | orchestrator | Wednesday 01 April 2026 00:57:08 +0000 (0:00:00.313) 0:00:07.717 ******* 2026-04-01 00:57:33.572356 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572367 | orchestrator | 2026-04-01 00:57:33.572378 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.572390 | orchestrator | Wednesday 01 April 2026 00:57:08 +0000 (0:00:00.122) 0:00:07.840 ******* 2026-04-01 00:57:33.572412 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572426 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572436 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572446 | orchestrator | 2026-04-01 00:57:33.572457 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572468 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.446) 0:00:08.287 ******* 2026-04-01 00:57:33.572478 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572489 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572500 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572511 | orchestrator | 2026-04-01 00:57:33.572521 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572531 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.312) 0:00:08.600 ******* 2026-04-01 00:57:33.572542 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572553 | orchestrator | 2026-04-01 00:57:33.572564 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-04-01 00:57:33.572582 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.133) 0:00:08.733 ******* 2026-04-01 00:57:33.572593 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572604 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572614 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572625 | orchestrator | 2026-04-01 00:57:33.572636 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572647 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.288) 0:00:09.022 ******* 2026-04-01 00:57:33.572658 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572669 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572680 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572691 | orchestrator | 2026-04-01 00:57:33.572702 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572723 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:00.398) 0:00:09.420 ******* 2026-04-01 00:57:33.572734 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572744 | orchestrator | 2026-04-01 00:57:33.572757 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.572768 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:00.403) 0:00:09.823 ******* 2026-04-01 00:57:33.572779 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572791 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572802 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572814 | orchestrator | 2026-04-01 00:57:33.572826 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572837 | orchestrator | Wednesday 01 April 2026 00:57:11 +0000 (0:00:00.341) 
0:00:10.164 ******* 2026-04-01 00:57:33.572849 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572856 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572863 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572870 | orchestrator | 2026-04-01 00:57:33.572878 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572885 | orchestrator | Wednesday 01 April 2026 00:57:11 +0000 (0:00:00.398) 0:00:10.563 ******* 2026-04-01 00:57:33.572891 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572898 | orchestrator | 2026-04-01 00:57:33.572905 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.572912 | orchestrator | Wednesday 01 April 2026 00:57:11 +0000 (0:00:00.126) 0:00:10.689 ******* 2026-04-01 00:57:33.572919 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.572926 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.572933 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.572939 | orchestrator | 2026-04-01 00:57:33.572946 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-01 00:57:33.572953 | orchestrator | Wednesday 01 April 2026 00:57:11 +0000 (0:00:00.274) 0:00:10.964 ******* 2026-04-01 00:57:33.572960 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:33.572967 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:33.572974 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:33.572980 | orchestrator | 2026-04-01 00:57:33.572987 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-01 00:57:33.572994 | orchestrator | Wednesday 01 April 2026 00:57:12 +0000 (0:00:00.689) 0:00:11.654 ******* 2026-04-01 00:57:33.573001 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573008 | orchestrator | 2026-04-01 00:57:33.573017 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-04-01 00:57:33.573028 | orchestrator | Wednesday 01 April 2026 00:57:12 +0000 (0:00:00.133) 0:00:11.788 ******* 2026-04-01 00:57:33.573040 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573051 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.573062 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573075 | orchestrator | 2026-04-01 00:57:33.573086 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-01 00:57:33.573098 | orchestrator | Wednesday 01 April 2026 00:57:12 +0000 (0:00:00.294) 0:00:12.083 ******* 2026-04-01 00:57:33.573109 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:33.573116 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:33.573123 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:33.573129 | orchestrator | 2026-04-01 00:57:33.573136 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-01 00:57:33.573143 | orchestrator | Wednesday 01 April 2026 00:57:14 +0000 (0:00:01.611) 0:00:13.695 ******* 2026-04-01 00:57:33.573168 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-01 00:57:33.573176 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-01 00:57:33.573183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-01 00:57:33.573196 | orchestrator | 2026-04-01 00:57:33.573203 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-01 00:57:33.573210 | orchestrator | Wednesday 01 April 2026 00:57:16 +0000 (0:00:01.723) 0:00:15.418 ******* 2026-04-01 00:57:33.573217 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-01 00:57:33.573233 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-01 00:57:33.573240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-01 00:57:33.573247 | orchestrator | 2026-04-01 00:57:33.573254 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-01 00:57:33.573260 | orchestrator | Wednesday 01 April 2026 00:57:19 +0000 (0:00:02.722) 0:00:18.141 ******* 2026-04-01 00:57:33.573268 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-01 00:57:33.573274 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-01 00:57:33.573281 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-01 00:57:33.573288 | orchestrator | 2026-04-01 00:57:33.573300 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-01 00:57:33.573307 | orchestrator | Wednesday 01 April 2026 00:57:20 +0000 (0:00:01.763) 0:00:19.905 ******* 2026-04-01 00:57:33.573314 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573321 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.573328 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573334 | orchestrator | 2026-04-01 00:57:33.573341 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-01 00:57:33.573348 | orchestrator | Wednesday 01 April 2026 00:57:21 +0000 (0:00:00.286) 0:00:20.191 ******* 2026-04-01 00:57:33.573355 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573361 | orchestrator | skipping: [testbed-node-1] 2026-04-01 
00:57:33.573368 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573375 | orchestrator | 2026-04-01 00:57:33.573382 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-01 00:57:33.573389 | orchestrator | Wednesday 01 April 2026 00:57:21 +0000 (0:00:00.269) 0:00:20.460 ******* 2026-04-01 00:57:33.573395 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:33.573403 | orchestrator | 2026-04-01 00:57:33.573409 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-01 00:57:33.573416 | orchestrator | Wednesday 01 April 2026 00:57:22 +0000 (0:00:00.769) 0:00:21.230 ******* 2026-04-01 00:57:33.573425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573480 | orchestrator | 2026-04-01 00:57:33.573488 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-01 00:57:33.573495 | orchestrator | Wednesday 01 April 2026 00:57:24 +0000 (0:00:01.860) 0:00:23.091 ******* 2026-04-01 00:57:33.573506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573514 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573541 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573560 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.573567 | orchestrator | 2026-04-01 00:57:33.573574 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-01 00:57:33.573581 | orchestrator | Wednesday 01 April 2026 00:57:24 +0000 (0:00:00.717) 0:00:23.808 ******* 2026-04-01 00:57:33.573613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573622 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573641 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.573658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573666 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573673 | orchestrator | 2026-04-01 00:57:33.573680 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-01 00:57:33.573687 | orchestrator | Wednesday 01 April 2026 00:57:26 +0000 (0:00:01.469) 0:00:25.278 ******* 2026-04-01 00:57:33.573698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-01 00:57:33.573739 | orchestrator | 2026-04-01 00:57:33.573747 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-01 00:57:33.573754 | orchestrator | Wednesday 01 April 2026 00:57:27 +0000 (0:00:01.396) 0:00:26.675 ******* 2026-04-01 00:57:33.573761 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:57:33.573771 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:33.573778 | orchestrator | } 2026-04-01 00:57:33.573785 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:57:33.573792 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 
00:57:33.573799 | orchestrator | } 2026-04-01 00:57:33.573806 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:57:33.573813 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:33.573819 | orchestrator | } 2026-04-01 00:57:33.573826 | orchestrator | 2026-04-01 00:57:33.573833 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:57:33.573840 | orchestrator | Wednesday 01 April 2026 00:57:27 +0000 (0:00:00.351) 0:00:27.027 ******* 2026-04-01 00:57:33.573848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573859 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:33.573875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573883 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-01 00:57:33.573904 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:33.573911 | orchestrator | 2026-04-01 00:57:33.573918 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-01 00:57:33.573925 | orchestrator | Wednesday 01 April 2026 00:57:29 +0000 (0:00:01.611) 0:00:28.638 ******* 2026-04-01 00:57:33.573932 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:33.573939 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 00:57:33.573946 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:57:33.573953 | orchestrator |
2026-04-01 00:57:33.573963 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-01 00:57:33.573971 | orchestrator | Wednesday 01 April 2026 00:57:29 +0000 (0:00:00.349) 0:00:28.988 *******
2026-04-01 00:57:33.573978 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:33.573985 | orchestrator |
2026-04-01 00:57:33.573992 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-01 00:57:33.573999 | orchestrator | Wednesday 01 April 2026 00:57:30 +0000 (0:00:00.648) 0:00:29.636 *******
2026-04-01 00:57:33.574006 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:57:33.574040 | orchestrator |
2026-04-01 00:57:33.574055 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:57:33.574072 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0
2026-04-01 00:57:33.574085 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-01 00:57:33.574105 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-01 00:57:33.574113 | orchestrator |
2026-04-01 00:57:33.574120 | orchestrator |
2026-04-01 00:57:33.574127 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:57:33.574134 | orchestrator | Wednesday 01 April 2026 00:57:31 +0000 (0:00:00.767) 0:00:30.404 *******
2026-04-01 00:57:33.574140 | orchestrator | ===============================================================================
2026-04-01 00:57:33.574147 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.72s
2026-04-01 00:57:33.574198 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.86s
2026-04-01 00:57:33.574205 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.76s
2026-04-01 00:57:33.574212 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.72s
2026-04-01 00:57:33.574219 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.61s
2026-04-01 00:57:33.574226 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.61s
2026-04-01 00:57:33.574233 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.57s
2026-04-01 00:57:33.574239 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.47s
2026-04-01 00:57:33.574246 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.40s
2026-04-01 00:57:33.574253 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s
2026-04-01 00:57:33.574259 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.77s
2026-04-01 00:57:33.574266 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s
2026-04-01 00:57:33.574273 | orchestrator | horizon : Update policy file name --------------------------------------- 0.69s
2026-04-01 00:57:33.574280 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-04-01 00:57:33.574286 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s
2026-04-01 00:57:33.574293 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-04-01 00:57:33.574300 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s
2026-04-01 00:57:33.574307 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s
2026-04-01 00:57:33.574313 | orchestrator | horizon : Check if policies shall be overwritten ------------------------ 0.40s
2026-04-01 00:57:33.574320 | orchestrator | horizon : Update policy file name --------------------------------------- 0.40s
2026-04-01 00:57:33.574327 | orchestrator | 2026-04-01 00:57:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:36.618440 | orchestrator | 2026-04-01 00:57:36 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED
2026-04-01 00:57:36.619597 | orchestrator | 2026-04-01 00:57:36 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:57:36.619648 | orchestrator | 2026-04-01 00:57:36 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:39.657274 | orchestrator | 2026-04-01 00:57:39 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED
2026-04-01 00:57:39.657771 | orchestrator | 2026-04-01 00:57:39 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:57:39.657833 | orchestrator | 2026-04-01 00:57:39 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:42.696021 | orchestrator | 2026-04-01 00:57:42 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED
2026-04-01 00:57:42.698335 | orchestrator | 2026-04-01 00:57:42 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:57:42.698402 | orchestrator | 2026-04-01 00:57:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:45.749889 | orchestrator | 2026-04-01 00:57:45 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state STARTED
2026-04-01 00:57:45.753684 | orchestrator | 2026-04-01 00:57:45 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:57:45.753734 | orchestrator | 2026-04-01 00:57:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:57:48.790894 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:57:48.792792 | orchestrator |
2026-04-01 00:57:48.792860 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task ce2837ac-6bb1-4072-9755-f78624891ac2 is in state SUCCESS
2026-04-01 00:57:48.794518 | orchestrator |
2026-04-01 00:57:48.794574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:57:48.794583 | orchestrator |
2026-04-01 00:57:48.794603 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:57:48.794610 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.323) 0:00:00.323 *******
2026-04-01 00:57:48.794616 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:57:48.795068 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:57:48.795090 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:57:48.795094 | orchestrator |
2026-04-01 00:57:48.795099 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:57:48.795103 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.271) 0:00:00.594 *******
2026-04-01 00:57:48.795108 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-01 00:57:48.795113 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-01 00:57:48.795117 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-01 00:57:48.795121 | orchestrator |
2026-04-01 00:57:48.795126 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-01 00:57:48.795130 | orchestrator |
2026-04-01 00:57:48.795134 | orchestrator | TASK [keystone : include_tasks]
************************************************ 2026-04-01 00:57:48.795139 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.301) 0:00:00.896 ******* 2026-04-01 00:57:48.795144 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:57:48.795149 | orchestrator | 2026-04-01 00:57:48.795153 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-01 00:57:48.795157 | orchestrator | Wednesday 01 April 2026 00:57:02 +0000 (0:00:00.662) 0:00:01.559 ******* 2026-04-01 00:57:48.795165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-01 00:57:48.795239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 
00:57:48.795424 | orchestrator |
2026-04-01 00:57:48.795431 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-01 00:57:48.795441 | orchestrator | Wednesday 01 April 2026 00:57:04 +0000 (0:00:02.350) 0:00:03.910 *******
2026-04-01 00:57:48.795447 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:57:48.795455 | orchestrator |
2026-04-01 00:57:48.795461 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-01 00:57:48.795466 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.106) 0:00:04.016 *******
2026-04-01 00:57:48.795472 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:57:48.795478 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:57:48.795484 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:57:48.795490 | orchestrator |
2026-04-01 00:57:48.795496 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-01 00:57:48.795501 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.253) 0:00:04.270 *******
2026-04-01 00:57:48.795507 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 00:57:48.795513 | orchestrator |
2026-04-01 00:57:48.795519 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-01 00:57:48.795525 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.881) 0:00:05.152 *******
2026-04-01 00:57:48.795548 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:57:48.795554 | orchestrator |
2026-04-01 00:57:48.795560 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-01 00:57:48.795566 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.614) 0:00:05.766 *******
2026-04-01 00:57:48.795574 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 
00:57:48.795621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795649 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.795662 | orchestrator | 2026-04-01 00:57:48.795674 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-01 00:57:48.795680 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:03.321) 0:00:09.088 ******* 2026-04-01 00:57:48.795713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795746 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.795753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795795 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.795812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795825 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.795832 | orchestrator | 2026-04-01 00:57:48.795838 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-01 00:57:48.795844 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:00.677) 0:00:09.765 ******* 2026-04-01 00:57:48.795858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795883 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 00:57:48.795889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795909 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.795923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.795934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.795941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.795947 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.795953 | orchestrator | 2026-04-01 00:57:48.795959 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-01 00:57:48.795965 | orchestrator | Wednesday 01 April 2026 00:57:11 +0000 (0:00:01.080) 0:00:10.846 ******* 2026-04-01 00:57:48.795972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.795996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796054 | orchestrator | 2026-04-01 00:57:48.796061 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-01 00:57:48.796068 | orchestrator | Wednesday 01 April 2026 00:57:15 +0000 (0:00:03.399) 0:00:14.245 ******* 2026-04-01 00:57:48.796074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796106 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796159 | orchestrator | 2026-04-01 00:57:48.796166 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-01 00:57:48.796173 | orchestrator | Wednesday 01 April 2026 00:57:20 +0000 (0:00:05.205) 0:00:19.451 ******* 2026-04-01 00:57:48.796179 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:57:48.796185 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:57:48.796191 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:57:48.796198 | orchestrator | 2026-04-01 00:57:48.796204 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-01 00:57:48.796210 | orchestrator | Wednesday 01 April 2026 00:57:21 +0000 (0:00:01.442) 0:00:20.894 ******* 2026-04-01 00:57:48.796217 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796223 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796230 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796236 | orchestrator | 2026-04-01 00:57:48.796262 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-01 00:57:48.796269 | orchestrator | Wednesday 01 April 2026 00:57:22 +0000 (0:00:00.800) 0:00:21.694 ******* 2026-04-01 00:57:48.796275 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796282 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796382 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796390 | orchestrator | 2026-04-01 00:57:48.796396 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2026-04-01 00:57:48.796403 | orchestrator | Wednesday 01 April 2026 00:57:23 +0000 (0:00:00.538) 0:00:22.233 ******* 2026-04-01 00:57:48.796409 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796415 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796421 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796427 | orchestrator | 2026-04-01 00:57:48.796432 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-01 00:57:48.796438 | orchestrator | Wednesday 01 April 2026 00:57:23 +0000 (0:00:00.281) 0:00:22.514 ******* 2026-04-01 00:57:48.796445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.796452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.796471 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.796495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.796508 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.796526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.796540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.796546 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796553 | orchestrator | 2026-04-01 00:57:48.796559 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-01 00:57:48.796565 | orchestrator | Wednesday 01 April 2026 00:57:24 +0000 (0:00:00.524) 0:00:23.038 ******* 2026-04-01 00:57:48.796572 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796578 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796584 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796590 | orchestrator | 2026-04-01 00:57:48.796596 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-01 00:57:48.796602 | orchestrator | Wednesday 01 April 2026 00:57:24 +0000 (0:00:00.465) 0:00:23.504 ******* 2026-04-01 00:57:48.796608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 00:57:48.796615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 00:57:48.796621 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-01 00:57:48.796628 | orchestrator | 2026-04-01 00:57:48.796634 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-01 00:57:48.796640 | orchestrator | Wednesday 01 April 2026 00:57:26 +0000 (0:00:02.175) 0:00:25.679 ******* 2026-04-01 00:57:48.796646 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:57:48.796652 | orchestrator | 2026-04-01 00:57:48.796658 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] 
****************************** 2026-04-01 00:57:48.796665 | orchestrator | Wednesday 01 April 2026 00:57:27 +0000 (0:00:00.971) 0:00:26.650 ******* 2026-04-01 00:57:48.796671 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.796677 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.796683 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.796689 | orchestrator | 2026-04-01 00:57:48.796695 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-01 00:57:48.796701 | orchestrator | Wednesday 01 April 2026 00:57:28 +0000 (0:00:00.774) 0:00:27.424 ******* 2026-04-01 00:57:48.796708 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-01 00:57:48.796714 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 00:57:48.796719 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 00:57:48.796730 | orchestrator | 2026-04-01 00:57:48.796736 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-01 00:57:48.796742 | orchestrator | Wednesday 01 April 2026 00:57:29 +0000 (0:00:01.445) 0:00:28.870 ******* 2026-04-01 00:57:48.796748 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:57:48.796755 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:57:48.796761 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:57:48.796767 | orchestrator | 2026-04-01 00:57:48.796773 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-01 00:57:48.796779 | orchestrator | Wednesday 01 April 2026 00:57:30 +0000 (0:00:00.317) 0:00:29.187 ******* 2026-04-01 00:57:48.796785 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-01 00:57:48.796791 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-01 00:57:48.796797 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'crontab.j2', 'dest': 'crontab'}) 2026-04-01 00:57:48.796803 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 00:57:48.796809 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 00:57:48.796815 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-01 00:57:48.796821 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 00:57:48.796828 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 00:57:48.796833 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-01 00:57:48.796837 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 00:57:48.796841 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 00:57:48.796845 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-01 00:57:48.796849 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 00:57:48.796853 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 00:57:48.796857 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 00:57:48.796864 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-01 00:57:48.796872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 00:57:48.796877 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 00:57:48.796880 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-01 00:57:48.796884 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 00:57:48.796888 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-01 00:57:48.796892 | orchestrator | 2026-04-01 00:57:48.796897 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-01 00:57:48.796900 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:09.292) 0:00:38.480 ******* 2026-04-01 00:57:48.796904 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 00:57:48.796908 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 00:57:48.796912 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-01 00:57:48.796920 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 00:57:48.796923 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 00:57:48.796927 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-01 00:57:48.796931 | orchestrator | 2026-04-01 00:57:48.796935 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-01 00:57:48.796941 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:02.828) 0:00:41.308 ******* 2026-04-01 00:57:48.796947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-01 00:57:48.796986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.796996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.797002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-01 00:57:48.797009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.797014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.797030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-01 00:57:48.797037 | orchestrator | 2026-04-01 00:57:48.797043 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-01 00:57:48.797049 | orchestrator | Wednesday 01 April 2026 00:57:44 +0000 (0:00:02.332) 0:00:43.640 ******* 2026-04-01 00:57:48.797054 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 00:57:48.797064 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:48.797070 | orchestrator | } 2026-04-01 00:57:48.797076 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 00:57:48.797082 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:48.797087 | orchestrator | } 2026-04-01 00:57:48.797093 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 00:57:48.797098 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 00:57:48.797104 | 
orchestrator | } 2026-04-01 00:57:48.797109 | orchestrator | 2026-04-01 00:57:48.797115 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 00:57:48.797122 | orchestrator | Wednesday 01 April 2026 00:57:44 +0000 (0:00:00.332) 0:00:43.973 ******* 2026-04-01 00:57:48.797129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.797136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.797142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.797148 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.797162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.797174 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.797181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.797187 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.797192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-01 00:57:48.797199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-01 00:57:48.797205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-01 00:57:48.797219 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.797225 | orchestrator | 2026-04-01 00:57:48.797231 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-04-01 00:57:48.797238 | orchestrator | Wednesday 01 April 2026 00:57:45 +0000 (0:00:00.927) 0:00:44.900 ******* 2026-04-01 00:57:48.797267 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:57:48.797275 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:57:48.797279 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:57:48.797282 | orchestrator | 2026-04-01 00:57:48.797289 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-01 00:57:48.797294 | orchestrator | Wednesday 01 April 2026 00:57:46 +0000 (0:00:00.293) 0:00:45.194 ******* 2026-04-01 00:57:48.797298 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-01 00:57:48.797302 | orchestrator | 2026-04-01 00:57:48.797306 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:57:48.797310 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-01 00:57:48.797316 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-01 00:57:48.797322 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-01 00:57:48.797326 | orchestrator | 2026-04-01 00:57:48.797330 | orchestrator | 2026-04-01 00:57:48.797334 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:57:48.797338 | orchestrator | Wednesday 01 April 2026 00:57:46 +0000 (0:00:00.733) 0:00:45.928 ******* 2026-04-01 00:57:48.797342 | orchestrator | =============================================================================== 2026-04-01 00:57:48.797346 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.29s 
2026-04-01 00:57:48.797350 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.21s 2026-04-01 00:57:48.797355 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.40s 2026-04-01 00:57:48.797359 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.32s 2026-04-01 00:57:48.797362 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.83s 2026-04-01 00:57:48.797366 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.35s 2026-04-01 00:57:48.797370 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.33s 2026-04-01 00:57:48.797374 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.18s 2026-04-01 00:57:48.797378 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.45s 2026-04-01 00:57:48.797451 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.44s 2026-04-01 00:57:48.797456 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 1.08s 2026-04-01 00:57:48.797460 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.97s 2026-04-01 00:57:48.797464 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.93s 2026-04-01 00:57:48.797468 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.88s 2026-04-01 00:57:48.797472 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.80s 2026-04-01 00:57:48.797476 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.77s 2026-04-01 00:57:48.797480 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.73s 2026-04-01 
00:57:48.797484 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.68s 2026-04-01 00:57:48.797488 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.66s 2026-04-01 00:57:48.797496 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.61s 2026-04-01 00:57:48.797500 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:48.797504 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED 2026-04-01 00:57:48.797511 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED 2026-04-01 00:57:48.798667 | orchestrator | 2026-04-01 00:57:48 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:57:48.798731 | orchestrator | 2026-04-01 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:51.822537 | orchestrator | 2026-04-01 00:57:51 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED 2026-04-01 00:57:51.824646 | orchestrator | 2026-04-01 00:57:51 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:51.827406 | orchestrator | 2026-04-01 00:57:51 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED 2026-04-01 00:57:51.828065 | orchestrator | 2026-04-01 00:57:51 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED 2026-04-01 00:57:51.828998 | orchestrator | 2026-04-01 00:57:51 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:57:51.829148 | orchestrator | 2026-04-01 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:54.863320 | orchestrator | 2026-04-01 00:57:54 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED 2026-04-01 00:57:54.867386 | orchestrator | 
2026-04-01 00:57:54 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:54.869352 | orchestrator | 2026-04-01 00:57:54 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED 2026-04-01 00:57:54.871446 | orchestrator | 2026-04-01 00:57:54 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED 2026-04-01 00:57:54.873117 | orchestrator | 2026-04-01 00:57:54 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:57:54.873404 | orchestrator | 2026-04-01 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:57:57.915163 | orchestrator | 2026-04-01 00:57:57 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED 2026-04-01 00:57:57.916245 | orchestrator | 2026-04-01 00:57:57 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:57:57.920094 | orchestrator | 2026-04-01 00:57:57 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED 2026-04-01 00:57:57.921623 | orchestrator | 2026-04-01 00:57:57 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED 2026-04-01 00:57:57.922997 | orchestrator | 2026-04-01 00:57:57 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:57:57.923089 | orchestrator | 2026-04-01 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:58:00.973150 | orchestrator | 2026-04-01 00:58:00 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED 2026-04-01 00:58:00.974944 | orchestrator | 2026-04-01 00:58:00 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED 2026-04-01 00:58:00.977924 | orchestrator | 2026-04-01 00:58:00 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED 2026-04-01 00:58:00.981219 | orchestrator | 2026-04-01 00:58:00 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED 2026-04-01 00:58:00.983582 | orchestrator | 
2026-04-01 00:58:00 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:00.983672 | orchestrator | 2026-04-01 00:58:00 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:04.029948 | orchestrator | 2026-04-01 00:58:04 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:04.032602 | orchestrator | 2026-04-01 00:58:04 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:04.034991 | orchestrator | 2026-04-01 00:58:04 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:04.037398 | orchestrator | 2026-04-01 00:58:04 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:04.039544 | orchestrator | 2026-04-01 00:58:04 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:04.039583 | orchestrator | 2026-04-01 00:58:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:07.094518 | orchestrator | 2026-04-01 00:58:07 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:07.096198 | orchestrator | 2026-04-01 00:58:07 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:07.097728 | orchestrator | 2026-04-01 00:58:07 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:07.098935 | orchestrator | 2026-04-01 00:58:07 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:07.100259 | orchestrator | 2026-04-01 00:58:07 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:07.100605 | orchestrator | 2026-04-01 00:58:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:10.146238 | orchestrator | 2026-04-01 00:58:10 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:10.147861 | orchestrator | 2026-04-01 00:58:10 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:10.149696 | orchestrator | 2026-04-01 00:58:10 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:10.151500 | orchestrator | 2026-04-01 00:58:10 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:10.153820 | orchestrator | 2026-04-01 00:58:10 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:10.153884 | orchestrator | 2026-04-01 00:58:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:13.200489 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:13.202789 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:13.203362 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:13.205002 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:13.206759 | orchestrator | 2026-04-01 00:58:13 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:13.206800 | orchestrator | 2026-04-01 00:58:13 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:16.246588 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:16.247717 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:16.249356 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:16.250774 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:16.252033 | orchestrator | 2026-04-01 00:58:16 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:16.252087 | orchestrator | 2026-04-01 00:58:16 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:19.305880 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:19.309734 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:19.312095 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:19.314150 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:19.316005 | orchestrator | 2026-04-01 00:58:19 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:19.316289 | orchestrator | 2026-04-01 00:58:19 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:22.371479 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:22.374670 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:22.377240 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:22.378900 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:22.380672 | orchestrator | 2026-04-01 00:58:22 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:22.380727 | orchestrator | 2026-04-01 00:58:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:25.424059 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:25.426309 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:25.429289 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:25.431214 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:25.433053 | orchestrator | 2026-04-01 00:58:25 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:25.433155 | orchestrator | 2026-04-01 00:58:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:28.474090 | orchestrator | 2026-04-01 00:58:28 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:28.477761 | orchestrator | 2026-04-01 00:58:28 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:28.483286 | orchestrator | 2026-04-01 00:58:28 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:28.488892 | orchestrator | 2026-04-01 00:58:28 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:28.491241 | orchestrator | 2026-04-01 00:58:28 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:28.492394 | orchestrator | 2026-04-01 00:58:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:31.541985 | orchestrator | 2026-04-01 00:58:31 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:31.543407 | orchestrator | 2026-04-01 00:58:31 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:31.545232 | orchestrator | 2026-04-01 00:58:31 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:31.546799 | orchestrator | 2026-04-01 00:58:31 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:31.548114 | orchestrator | 2026-04-01 00:58:31 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:31.548155 | orchestrator | 2026-04-01 00:58:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:34.598071 | orchestrator | 2026-04-01 00:58:34 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:34.600032 | orchestrator | 2026-04-01 00:58:34 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:34.602836 | orchestrator | 2026-04-01 00:58:34 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:34.604355 | orchestrator | 2026-04-01 00:58:34 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:34.606837 | orchestrator | 2026-04-01 00:58:34 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:34.606873 | orchestrator | 2026-04-01 00:58:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:37.661179 | orchestrator | 2026-04-01 00:58:37 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:37.663321 | orchestrator | 2026-04-01 00:58:37 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:37.668133 | orchestrator | 2026-04-01 00:58:37 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:37.671798 | orchestrator | 2026-04-01 00:58:37 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:37.674763 | orchestrator | 2026-04-01 00:58:37 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:37.674848 | orchestrator | 2026-04-01 00:58:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:40.722392 | orchestrator | 2026-04-01 00:58:40 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:40.723962 | orchestrator | 2026-04-01 00:58:40 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:40.725618 | orchestrator | 2026-04-01 00:58:40 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:40.727391 | orchestrator | 2026-04-01 00:58:40 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:40.729136 | orchestrator | 2026-04-01 00:58:40 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:40.729188 | orchestrator | 2026-04-01 00:58:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:43.785715 | orchestrator | 2026-04-01 00:58:43 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state STARTED
2026-04-01 00:58:43.786289 | orchestrator | 2026-04-01 00:58:43 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:43.788841 | orchestrator | 2026-04-01 00:58:43 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:43.791154 | orchestrator | 2026-04-01 00:58:43 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state STARTED
2026-04-01 00:58:43.793105 | orchestrator | 2026-04-01 00:58:43 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:43.793203 | orchestrator | 2026-04-01 00:58:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:46.832393 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task fe468b18-05e1-427a-b535-2a1b15d29d0f is in state SUCCESS
2026-04-01 00:58:46.835227 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:46.838540 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state STARTED
2026-04-01 00:58:46.839966 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task 7be7858d-702f-479f-8c4d-49efcca9b9ee is in state SUCCESS
2026-04-01 00:58:46.844180 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:58:46.847807 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:46.850180 | orchestrator | 2026-04-01 00:58:46 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:58:46.850536 | orchestrator | 2026-04-01 00:58:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:49.897356 | orchestrator | 2026-04-01 00:58:49 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:49.900477 | orchestrator | 2026-04-01 00:58:49 | INFO  | Task 9e5e9e52-7b68-404a-861c-60066871749f is in state SUCCESS
2026-04-01 00:58:49.902046 | orchestrator |
2026-04-01 00:58:49.902129 | orchestrator |
2026-04-01 00:58:49.902149 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:58:49.902166 | orchestrator |
2026-04-01 00:58:49.902179 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:58:49.902190 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.460) 0:00:00.460 *******
2026-04-01 00:58:49.902204 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:58:49.902213 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:58:49.902221 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:58:49.902229 | orchestrator |
2026-04-01 00:58:49.902237 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:58:49.902245 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.295) 0:00:00.756 *******
2026-04-01 00:58:49.902253 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-01 00:58:49.902261 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-01 00:58:49.902269 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-01 00:58:49.902277 | orchestrator |
2026-04-01 00:58:49.902286 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-01 00:58:49.902294 | orchestrator |
2026-04-01 00:58:49.902302 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-01 00:58:49.902310 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.328) 0:00:01.084 *******
2026-04-01 00:58:49.902317 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:58:49.902327 | orchestrator |
2026-04-01 00:58:49.902335 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-01 00:58:49.902344 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.489) 0:00:01.574 *******
2026-04-01 00:58:49.902352 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-01 00:58:49.902361 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-01 00:58:49.902369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-01 00:58:49.902377 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-01 00:58:49.902384 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-01 00:58:49.902421 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:58:49.902431 | orchestrator |
2026-04-01 00:58:49.902439 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:58:49.902446 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902455 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902464 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902471 | orchestrator |
2026-04-01 00:58:49.902479 | orchestrator |
2026-04-01 00:58:49.902487 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:58:49.902495 | orchestrator | Wednesday 01 April 2026 00:58:44 +0000 (0:00:52.984) 0:00:54.558 *******
2026-04-01 00:58:49.902502 | orchestrator | ===============================================================================
2026-04-01 00:58:49.902509 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 52.99s
2026-04-01 00:58:49.902517 | orchestrator | designate : include_tasks ----------------------------------------------- 0.49s
2026-04-01 00:58:49.902525 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2026-04-01 00:58:49.902533 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-04-01 00:58:49.902541 | orchestrator |
2026-04-01 00:58:49.902548 | orchestrator |
2026-04-01 00:58:49.902556 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:58:49.902564 | orchestrator |
2026-04-01 00:58:49.902641 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:58:49.902651 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.385) 0:00:00.385 *******
2026-04-01 00:58:49.902671 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:58:49.902679 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:58:49.902686 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:58:49.902694 | orchestrator |
2026-04-01 00:58:49.902702 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:58:49.902710 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.274) 0:00:00.659 *******
2026-04-01 00:58:49.902717 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-01 00:58:49.902726 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-01 00:58:49.902730 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-01 00:58:49.902735 | orchestrator |
2026-04-01 00:58:49.902740 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-01 00:58:49.902745 | orchestrator |
2026-04-01 00:58:49.902763 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-01 00:58:49.902768 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.290) 0:00:00.950 *******
2026-04-01 00:58:49.902773 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:58:49.902778 | orchestrator |
2026-04-01 00:58:49.902782 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-01 00:58:49.902787 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.544) 0:00:01.494 *******
2026-04-01 00:58:49.902792 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left).
2026-04-01 00:58:49.902796 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left).
2026-04-01 00:58:49.902812 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left).
2026-04-01 00:58:49.902817 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left).
2026-04-01 00:58:49.902822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left).
2026-04-01 00:58:49.902827 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:58:49.902835 | orchestrator |
2026-04-01 00:58:49.902839 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:58:49.902844 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902849 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902854 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 00:58:49.902859 | orchestrator |
2026-04-01 00:58:49.902863 | orchestrator |
2026-04-01 00:58:49.902868 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:58:49.902873 | orchestrator | Wednesday 01 April 2026 00:58:44 +0000 (0:00:52.985) 0:00:54.480 *******
2026-04-01 00:58:49.902877 | orchestrator | ===============================================================================
2026-04-01 00:58:49.902882 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 52.99s
2026-04-01 00:58:49.902886 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.54s
2026-04-01 00:58:49.902891 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2026-04-01 00:58:49.902895 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-04-01 00:58:49.902900 | orchestrator |
2026-04-01 00:58:49.902905 | orchestrator |
2026-04-01 00:58:49.902909 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-01 00:58:49.902914 | orchestrator |
2026-04-01 00:58:49.902919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-01 00:58:49.902923 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.295) 0:00:00.295 *******
2026-04-01 00:58:49.902928 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:58:49.902932 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:58:49.902937 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:58:49.902942 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:49.902946 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:49.902951 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:49.902958 | orchestrator |
2026-04-01 00:58:49.902966 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-01 00:58:49.902973 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.458) 0:00:00.754 *******
2026-04-01 00:58:49.902980 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-01 00:58:49.902987 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-01 00:58:49.902995 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-01 00:58:49.903002 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-01 00:58:49.903009 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-01 00:58:49.903016 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-01 00:58:49.903024 | orchestrator |
2026-04-01 00:58:49.903032 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-01 00:58:49.903045 | orchestrator |
2026-04-01 00:58:49.903058 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-01 00:58:49.903067 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.500) 0:00:01.254 *******
2026-04-01 00:58:49.903072 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:58:49.903077 | orchestrator |
2026-04-01 00:58:49.903085 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-01 00:58:49.903092 | orchestrator | Wednesday 01 April 2026 00:57:52 +0000 (0:00:00.969) 0:00:02.224 *******
2026-04-01 00:58:49.903099 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:58:49.903106 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:58:49.903114 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:58:49.903121 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:49.903144 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:49.903152 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:49.903159 | orchestrator |
2026-04-01 00:58:49.903166 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-01 00:58:49.903174 | orchestrator | Wednesday 01 April 2026 00:57:54 +0000 (0:00:01.370) 0:00:03.594 *******
2026-04-01 00:58:49.903181 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:58:49.903189 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:58:49.903197 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:58:49.903205 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:58:49.903212 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:58:49.903221 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:58:49.903227 | orchestrator |
2026-04-01 00:58:49.903232 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-01 00:58:49.903237 | orchestrator | Wednesday 01 April 2026 00:57:55 +0000 (0:00:01.201) 0:00:04.796 *******
2026-04-01 00:58:49.903241 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:58:49.903246 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:58:49.903251 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:58:49.903255 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:49.903260 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:49.903265 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:49.903270 | orchestrator |
2026-04-01 00:58:49.903277 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-01 00:58:49.903284 | orchestrator | Wednesday 01 April 2026 00:57:55 +0000 (0:00:00.439) 0:00:05.235 *******
2026-04-01 00:58:49.903289 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:58:49.903294 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:58:49.903298 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:58:49.903303 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:58:49.903308 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:58:49.903312 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:58:49.903317 | orchestrator |
2026-04-01 00:58:49.903322 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-01 00:58:49.903326 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:00.587) 0:00:05.823 *******
2026-04-01 00:58:49.903331 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-01 00:58:49.903336 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-01 00:58:49.903341 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-01 00:58:49.903345 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-01 00:58:49.903350 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-01 00:58:49.903356 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 00:58:49.903367 | orchestrator |
2026-04-01 00:58:49.903372 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 00:58:49.903377 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903382 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903387 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903391 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903396 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903401 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 00:58:49.903405 | orchestrator |
2026-04-01 00:58:49.903410 | orchestrator |
2026-04-01 00:58:49.903415 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 00:58:49.903419 | orchestrator | Wednesday 01 April 2026 00:58:49 +0000 (0:00:52.737) 0:00:58.561 *******
2026-04-01 00:58:49.903432 | orchestrator | ===============================================================================
2026-04-01 00:58:49.903439 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 52.74s
2026-04-01 00:58:49.903446 | orchestrator | neutron : Get container facts ------------------------------------------- 1.37s
2026-04-01 00:58:49.903453 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.20s
2026-04-01 00:58:49.903460 | orchestrator | neutron : include_tasks ------------------------------------------------- 0.97s
2026-04-01 00:58:49.903468 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.59s
2026-04-01 00:58:49.903474 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2026-04-01 00:58:49.903487 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-04-01 00:58:49.903495 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.44s
2026-04-01 00:58:49.905638 | orchestrator | 2026-04-01 00:58:49 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:58:49.907707 | orchestrator | 2026-04-01 00:58:49 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:49.909869 | orchestrator | 2026-04-01 00:58:49 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:58:49.910066 | orchestrator | 2026-04-01 00:58:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:52.971830 | orchestrator | 2026-04-01 00:58:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:58:52.976184 | orchestrator | 2026-04-01 00:58:52 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:52.979169 | orchestrator | 2026-04-01 00:58:52 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:58:52.980933 | orchestrator | 2026-04-01 00:58:52 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:52.982815 | orchestrator | 2026-04-01 00:58:52 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:58:52.983280 | orchestrator | 2026-04-01 00:58:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:56.035453 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:58:56.036578 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:56.039401 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:58:56.039972 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:56.041785 | orchestrator | 2026-04-01 00:58:56 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:58:56.041890 | orchestrator | 2026-04-01 00:58:56 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:58:59.086511 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:58:59.088407 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:58:59.090228 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:58:59.091900 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:58:59.093463 | orchestrator | 2026-04-01 00:58:59 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:58:59.093502 | orchestrator | 2026-04-01 00:58:59 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:02.141265 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:02.142199 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:02.143671 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:02.145122 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:02.147607 | orchestrator | 2026-04-01 00:59:02 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:02.147785 | orchestrator | 2026-04-01 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:05.190225 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:05.191165 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:05.193125 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:05.194177 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:05.195226 | orchestrator | 2026-04-01 00:59:05 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:05.195257 | orchestrator | 2026-04-01 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:08.245239 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:08.247200 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:08.251097 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:08.252701 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:08.254306 | orchestrator | 2026-04-01 00:59:08 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:08.254343 | orchestrator | 2026-04-01 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:11.296310 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:11.298516 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:11.300376 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:11.304259 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:11.304310 | orchestrator | 2026-04-01 00:59:11 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:11.304319 | orchestrator | 2026-04-01 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:14.342870 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:14.344580 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:14.346814 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:14.349306 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:14.352230 | orchestrator | 2026-04-01 00:59:14 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:14.352853 | orchestrator | 2026-04-01 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:17.394899 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:17.396651 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:17.398655 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:17.400543 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:17.402404 | orchestrator | 2026-04-01 00:59:17 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:17.402670 | orchestrator | 2026-04-01 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:20.447262 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:20.447987 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state STARTED
2026-04-01 00:59:20.451021 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED
2026-04-01 00:59:20.454155 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED
2026-04-01 00:59:20.458316 | orchestrator | 2026-04-01 00:59:20 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED
2026-04-01 00:59:20.458401 | orchestrator | 2026-04-01 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-04-01 00:59:23.508585 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 00:59:23.515279 | orchestrator |
2026-04-01 00:59:23.515377 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task b96ebdf0-75f1-4452-838a-8e05e59ac73d is in state SUCCESS
2026-04-01 00:59:23.516862 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-01 00:59:23.516924 | orchestrator | 2.16.14
2026-04-01 00:59:23.516931 | orchestrator |
2026-04-01 00:59:23.516936 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-01 00:59:23.516941 | orchestrator |
2026-04-01 00:59:23.516945 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-01 00:59:23.516950 | orchestrator | Wednesday 01 April 2026 00:48:53 +0000 (0:00:00.847) 0:00:00.847 *******
2026-04-01 00:59:23.516955 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.516960 | orchestrator |
2026-04-01 00:59:23.516964 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-01 00:59:23.516968 | orchestrator | Wednesday 01 April 2026 00:48:55 +0000 (0:00:01.583) 0:00:02.431 *******
2026-04-01 00:59:23.516972 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.516976 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.516980 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.516984 | orchestrator |
ok: [testbed-node-5] 2026-04-01 00:59:23.516988 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.516992 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.516996 | orchestrator | 2026-04-01 00:59:23.517000 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-01 00:59:23.517004 | orchestrator | Wednesday 01 April 2026 00:48:56 +0000 (0:00:01.639) 0:00:04.071 ******* 2026-04-01 00:59:23.517007 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.517011 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.517015 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.517019 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.517022 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.517026 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.517030 | orchestrator | 2026-04-01 00:59:23.517034 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-01 00:59:23.517038 | orchestrator | Wednesday 01 April 2026 00:48:57 +0000 (0:00:00.734) 0:00:04.805 ******* 2026-04-01 00:59:23.517042 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.517046 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.517050 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.517053 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.517057 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.517061 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.517065 | orchestrator | 2026-04-01 00:59:23.517068 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-01 00:59:23.517072 | orchestrator | Wednesday 01 April 2026 00:48:58 +0000 (0:00:01.002) 0:00:05.807 ******* 2026-04-01 00:59:23.517076 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.517082 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.517089 | orchestrator | ok: [testbed-node-5] 2026-04-01 
00:59:23.517094 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.517100 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.517105 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.517111 | orchestrator | 2026-04-01 00:59:23.517116 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-01 00:59:23.517121 | orchestrator | Wednesday 01 April 2026 00:48:59 +0000 (0:00:01.144) 0:00:06.952 ******* 2026-04-01 00:59:23.517127 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.517133 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.517138 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.517827 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.517839 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.517845 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.517850 | orchestrator | 2026-04-01 00:59:23.517858 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-01 00:59:23.517942 | orchestrator | Wednesday 01 April 2026 00:49:00 +0000 (0:00:00.880) 0:00:07.832 ******* 2026-04-01 00:59:23.517947 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.517951 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.517955 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.517959 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.517963 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.517968 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.517972 | orchestrator | 2026-04-01 00:59:23.517977 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-01 00:59:23.517981 | orchestrator | Wednesday 01 April 2026 00:49:01 +0000 (0:00:01.104) 0:00:08.937 ******* 2026-04-01 00:59:23.517986 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.517990 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.517994 | 
orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.517998 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.518002 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.518006 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.518010 | orchestrator | 2026-04-01 00:59:23.518047 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-01 00:59:23.518051 | orchestrator | Wednesday 01 April 2026 00:49:02 +0000 (0:00:00.840) 0:00:09.777 ******* 2026-04-01 00:59:23.518055 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.518059 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.518063 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.518067 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.518071 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.518075 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.518079 | orchestrator | 2026-04-01 00:59:23.518083 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-01 00:59:23.518087 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:00.610) 0:00:10.387 ******* 2026-04-01 00:59:23.518091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:59:23.518095 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:59:23.518099 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:59:23.518103 | orchestrator | 2026-04-01 00:59:23.518107 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-01 00:59:23.518117 | orchestrator | Wednesday 01 April 2026 00:49:03 +0000 (0:00:00.589) 0:00:10.977 ******* 2026-04-01 00:59:23.518123 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.518129 | orchestrator | ok: 
[testbed-node-4] 2026-04-01 00:59:23.518135 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.518163 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.518167 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.518171 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.518175 | orchestrator | 2026-04-01 00:59:23.518179 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-01 00:59:23.518183 | orchestrator | Wednesday 01 April 2026 00:49:05 +0000 (0:00:01.916) 0:00:12.894 ******* 2026-04-01 00:59:23.518187 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 00:59:23.518190 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:59:23.518194 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 00:59:23.518198 | orchestrator | 2026-04-01 00:59:23.518202 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-01 00:59:23.518206 | orchestrator | Wednesday 01 April 2026 00:49:07 +0000 (0:00:02.361) 0:00:15.255 ******* 2026-04-01 00:59:23.518210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-01 00:59:23.518214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-01 00:59:23.518217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-01 00:59:23.518227 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518231 | orchestrator | 2026-04-01 00:59:23.518235 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-01 00:59:23.518239 | orchestrator | Wednesday 01 April 2026 00:49:09 +0000 (0:00:01.125) 0:00:16.380 ******* 2026-04-01 00:59:23.518245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518260 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518263 | orchestrator | 2026-04-01 00:59:23.518267 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-01 00:59:23.518271 | orchestrator | Wednesday 01 April 2026 00:49:10 +0000 (0:00:01.771) 0:00:18.152 ******* 2026-04-01 00:59:23.518276 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518287 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518291 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518295 | orchestrator | 2026-04-01 00:59:23.518298 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-01 00:59:23.518302 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:00.530) 0:00:18.683 ******* 2026-04-01 00:59:23.518323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-01 00:49:06.070434', 'end': '2026-04-01 00:49:06.160744', 'delta': '0:00:00.090310', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-01 00:49:06.712148', 'end': '2026-04-01 00:49:06.800800', 'delta': '0:00:00.088652', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-01 00:49:07.428700', 'end': '2026-04-01 00:49:07.526191', 'delta': '0:00:00.097491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.518343 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518347 | orchestrator | 2026-04-01 00:59:23.518350 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-01 00:59:23.518354 | orchestrator | Wednesday 01 April 2026 00:49:11 +0000 (0:00:00.502) 0:00:19.185 ******* 2026-04-01 00:59:23.518704 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.518710 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.518714 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.518718 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.518722 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.518726 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.518729 | orchestrator | 2026-04-01 00:59:23.518766 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already 
running] ************* 2026-04-01 00:59:23.518773 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:02.209) 0:00:21.394 ******* 2026-04-01 00:59:23.518779 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:59:23.518785 | orchestrator | 2026-04-01 00:59:23.518791 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-01 00:59:23.518796 | orchestrator | Wednesday 01 April 2026 00:49:14 +0000 (0:00:00.734) 0:00:22.129 ******* 2026-04-01 00:59:23.518803 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518809 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.518815 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.518822 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.518828 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.518835 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.518839 | orchestrator | 2026-04-01 00:59:23.518843 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-01 00:59:23.518847 | orchestrator | Wednesday 01 April 2026 00:49:16 +0000 (0:00:01.333) 0:00:23.463 ******* 2026-04-01 00:59:23.518851 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518855 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.518859 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.518862 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.518866 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.518870 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.518874 | orchestrator | 2026-04-01 00:59:23.518878 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-01 00:59:23.518882 | orchestrator | Wednesday 01 April 2026 00:49:17 +0000 (0:00:01.492) 0:00:24.955 ******* 2026-04-01 00:59:23.518886 | orchestrator | 
skipping: [testbed-node-3] 2026-04-01 00:59:23.518890 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.518894 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.518905 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.518908 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.518912 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.518916 | orchestrator | 2026-04-01 00:59:23.518920 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-01 00:59:23.518924 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:00.726) 0:00:25.681 ******* 2026-04-01 00:59:23.518928 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518932 | orchestrator | 2026-04-01 00:59:23.518936 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-01 00:59:23.518939 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:00.247) 0:00:25.929 ******* 2026-04-01 00:59:23.518943 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518947 | orchestrator | 2026-04-01 00:59:23.518951 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-01 00:59:23.518959 | orchestrator | Wednesday 01 April 2026 00:49:18 +0000 (0:00:00.244) 0:00:26.174 ******* 2026-04-01 00:59:23.518963 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.518967 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.518971 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.518993 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.518997 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519001 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519005 | orchestrator | 2026-04-01 00:59:23.519009 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-01 
00:59:23.519013 | orchestrator | Wednesday 01 April 2026 00:49:19 +0000 (0:00:00.565) 0:00:26.739 ******* 2026-04-01 00:59:23.519017 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519021 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519024 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519028 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519032 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519036 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519040 | orchestrator | 2026-04-01 00:59:23.519044 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-01 00:59:23.519048 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:00.646) 0:00:27.386 ******* 2026-04-01 00:59:23.519052 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519056 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519101 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519106 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519110 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519114 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519118 | orchestrator | 2026-04-01 00:59:23.519122 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-01 00:59:23.519126 | orchestrator | Wednesday 01 April 2026 00:49:20 +0000 (0:00:00.744) 0:00:28.130 ******* 2026-04-01 00:59:23.519130 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519134 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519138 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519141 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519145 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519149 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519153 | orchestrator | 
2026-04-01 00:59:23.519157 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-01 00:59:23.519161 | orchestrator | Wednesday 01 April 2026 00:49:21 +0000 (0:00:00.968) 0:00:29.099 ******* 2026-04-01 00:59:23.519165 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519169 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519173 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519177 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519180 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519184 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519193 | orchestrator | 2026-04-01 00:59:23.519197 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-01 00:59:23.519201 | orchestrator | Wednesday 01 April 2026 00:49:22 +0000 (0:00:00.629) 0:00:29.728 ******* 2026-04-01 00:59:23.519205 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519211 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519217 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519222 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519228 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.519624 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519632 | orchestrator | 2026-04-01 00:59:23.519637 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-01 00:59:23.519642 | orchestrator | Wednesday 01 April 2026 00:49:23 +0000 (0:00:01.008) 0:00:30.736 ******* 2026-04-01 00:59:23.519646 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.519650 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.519654 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.519658 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.519667 | orchestrator | 
skipping: [testbed-node-1] 2026-04-01 00:59:23.519673 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.519679 | orchestrator | 2026-04-01 00:59:23.519685 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-01 00:59:23.519691 | orchestrator | Wednesday 01 April 2026 00:49:24 +0000 (0:00:00.845) 0:00:31.581 ******* 2026-04-01 00:59:23.519699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583', 'dm-uuid-LVM-XVYMn3IN00mdi6EnfVkPlw256qq9nI7912VpaCpkpbqfuvPtEYrqcEyji9q53KBz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993', 'dm-uuid-LVM-JQL58WVQQeGdBvo3KJNSREIYwthU36Keczsc7QaX34X6TCp6mDZGh2SdZgOENJGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519858 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d', 'dm-uuid-LVM-iMX0SsshsPQVLScJsBqh3Uii0sRvXBeOCIRYfrnn2E2CJid3H0gSkexslogBax5C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 
00:59:23.519924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082', 'dm-uuid-LVM-AzMBHf9V42Lz4YPHKNHAEEsPuJnHRSdJoTXEpZXZJVDV0MamSFteceMneZc4yeoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.519944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.519949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j3cUEk-BjBv-qffa-yDut-NG4M-uRvZ-xxhpE2', 'scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896', 'scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.519957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520065 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdc', 'value': {'holders': ['ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMZbdy-hpNd-YpXd-F35t-13ZE-ubGA-klAIbY', 'scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402', 'scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1', 'scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  
2026-04-01 00:59:23.520142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-edqX2r-NIRK-P1Nk-DRh5-tSiQ-BYrO-Mo2mdM', 'scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4', 'scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IxVIHV-3xe3-l3il-mVFL-Ev2H-4sn6-FPVpoS', 'scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005', 'scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7', 'scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f', 
'dm-uuid-LVM-Jq2MIcpey21uNPOZEaO9KhTykiV3qU0ZJf4J3S8rWh1hJgZ67k96VkIqEvzh4OyU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f', 'dm-uuid-LVM-6XbyFf6QbhKgKGPUkVKGPbWJ8VbkkOv366W0EKFsdJAkWsCELrMi62mRphvQtkxR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520217 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.520223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-01 00:59:23.520230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520719 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Pgqrb-Y4oO-t51v-LUqF-Xfe4-tPEB-8uA0p8', 'scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363', 'scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X6NvH4-s8a1-fThR-cuqO-gA38-WCiF-j7Gb9y', 'scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67', 'scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7', 'scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520866 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.520873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520976 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.520987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.520994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521006 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521013 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.521019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521025 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.521031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part1', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part14', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part15', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.521139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.521148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521157 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.521161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521213 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 00:59:23.521222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.521264 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 00:59:23.521270 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.521274 | orchestrator | 2026-04-01 00:59:23.521279 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-01 00:59:23.521284 | orchestrator | Wednesday 01 April 2026 00:49:26 +0000 (0:00:02.138) 0:00:33.720 ******* 2026-04-01 00:59:23.521290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583', 'dm-uuid-LVM-XVYMn3IN00mdi6EnfVkPlw256qq9nI7912VpaCpkpbqfuvPtEYrqcEyji9q53KBz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993', 'dm-uuid-LVM-JQL58WVQQeGdBvo3KJNSREIYwthU36Keczsc7QaX34X6TCp6mDZGh2SdZgOENJGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j3cUEk-BjBv-qffa-yDut-NG4M-uRvZ-xxhpE2', 'scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896', 'scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d', 'dm-uuid-LVM-iMX0SsshsPQVLScJsBqh3Uii0sRvXBeOCIRYfrnn2E2CJid3H0gSkexslogBax5C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMZbdy-hpNd-YpXd-F35t-13ZE-ubGA-klAIbY', 'scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402', 'scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082', 'dm-uuid-LVM-AzMBHf9V42Lz4YPHKNHAEEsPuJnHRSdJoTXEpZXZJVDV0MamSFteceMneZc4yeoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1', 'scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-01 00:59:23.521538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521544 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521622 | orchestrator | skipping: 
[testbed-node-3] 2026-04-01 00:59:23.521629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521683 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-edqX2r-NIRK-P1Nk-DRh5-tSiQ-BYrO-Mo2mdM', 'scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4', 'scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IxVIHV-3xe3-l3il-mVFL-Ev2H-4sn6-FPVpoS', 'scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005', 'scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7', 'scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f', 'dm-uuid-LVM-Jq2MIcpey21uNPOZEaO9KhTykiV3qU0ZJf4J3S8rWh1hJgZ67k96VkIqEvzh4OyU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f', 'dm-uuid-LVM-6XbyFf6QbhKgKGPUkVKGPbWJ8VbkkOv366W0EKFsdJAkWsCELrMi62mRphvQtkxR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.521991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Pgqrb-Y4oO-t51v-LUqF-Xfe4-tPEB-8uA0p8', 'scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363', 'scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X6NvH4-s8a1-fThR-cuqO-gA38-WCiF-j7Gb9y', 'scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67', 'scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522145 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.522149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7', 'scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522203 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522248 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522259 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522316 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522326 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_acd982d5-be51-4ada-8242-b77ed84f08a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:59:23.522379 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522389 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.522395 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522401 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522417 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522424 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522430 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522437 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.522446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522495 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522516 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522529 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part1', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part14', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part15', 'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_a7e8e07f-8fa0-4520-8bbc-80ec122b709d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522580 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522589 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.522596 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522607 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522613 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522625 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522631 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522680 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 00:59:23.522689 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522701 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_afbe02e5-eb4b-4e1e-8854-e9a45bf0751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522711 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 00:59:23.522719 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.522725 | orchestrator | 2026-04-01 00:59:23.522783 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] 
******************************
2026-04-01 00:59:23.522792 | orchestrator | Wednesday 01 April 2026 00:49:28 +0000 (0:00:01.608) 0:00:35.329 *******
2026-04-01 00:59:23.522798 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.522805 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.522816 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.522822 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.522827 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.522833 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.522838 | orchestrator |
2026-04-01 00:59:23.522844 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-01 00:59:23.522849 | orchestrator | Wednesday 01 April 2026 00:49:30 +0000 (0:00:02.169) 0:00:37.499 *******
2026-04-01 00:59:23.522855 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.522861 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.522867 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.522873 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.522880 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.522886 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.522892 | orchestrator |
2026-04-01 00:59:23.522897 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:59:23.522903 | orchestrator | Wednesday 01 April 2026 00:49:31 +0000 (0:00:00.925) 0:00:38.424 *******
2026-04-01 00:59:23.522909 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.522915 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.522921 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.522927 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.522934 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.522940 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.522946 | orchestrator |
2026-04-01 00:59:23.522952 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:59:23.522958 | orchestrator | Wednesday 01 April 2026 00:49:32 +0000 (0:00:00.981) 0:00:39.406 *******
2026-04-01 00:59:23.522964 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.522970 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.522977 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.522983 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.522989 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.522994 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.523000 | orchestrator |
2026-04-01 00:59:23.523007 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-01 00:59:23.523013 | orchestrator | Wednesday 01 April 2026 00:49:33 +0000 (0:00:01.262) 0:00:40.669 *******
2026-04-01 00:59:23.523019 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523143 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523155 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523161 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.523168 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.523173 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.523179 | orchestrator |
2026-04-01 00:59:23.523184 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-01 00:59:23.523190 | orchestrator | Wednesday 01 April 2026 00:49:34 +0000 (0:00:01.306) 0:00:41.975 *******
2026-04-01 00:59:23.523197 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523202 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523208 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523215 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.523221 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.523227 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.523231 | orchestrator |
2026-04-01 00:59:23.523235 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-01 00:59:23.523239 | orchestrator | Wednesday 01 April 2026 00:49:35 +0000 (0:00:01.170) 0:00:43.146 *******
2026-04-01 00:59:23.523246 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:59:23.523253 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:59:23.523259 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:59:23.523268 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:59:23.523277 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:59:23.523294 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:59:23.523300 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:59:23.523306 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:59:23.523311 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:59:23.523317 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:59:23.523324 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:59:23.523331 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:59:23.523337 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:59:23.523342 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:59:23.523348 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:59:23.523354 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:59:23.523361 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:59:23.523368 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-01
00:59:23.523375 | orchestrator |
2026-04-01 00:59:23.523381 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-01 00:59:23.523387 | orchestrator | Wednesday 01 April 2026 00:49:40 +0000 (0:00:04.216) 0:00:47.362 *******
2026-04-01 00:59:23.523394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-01 00:59:23.523400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-01 00:59:23.523405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-01 00:59:23.523411 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-01 00:59:23.523424 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-01 00:59:23.523436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-01 00:59:23.523443 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-01 00:59:23.523506 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-01 00:59:23.523514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-01 00:59:23.523519 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:59:23.523530 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:59:23.523536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-01 00:59:23.523541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:59:23.523547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-01 00:59:23.523553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-01 00:59:23.523559 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.523565 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.523572 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-01 00:59:23.523578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-01 00:59:23.523585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-01 00:59:23.523590 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.523597 | orchestrator |
2026-04-01 00:59:23.523602 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-01 00:59:23.523608 | orchestrator | Wednesday 01 April 2026 00:49:41 +0000 (0:00:01.078) 0:00:48.441 *******
2026-04-01 00:59:23.523614 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.523619 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.523625 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.523633 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.523647 | orchestrator |
2026-04-01 00:59:23.523653 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-01 00:59:23.523662 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:01.054) 0:00:49.495 *******
2026-04-01 00:59:23.523667 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523674 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523679 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523684 | orchestrator |
2026-04-01 00:59:23.523690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-01 00:59:23.523696 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.383) 0:00:49.879 *******
2026-04-01 00:59:23.523702 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523708 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523714 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523719 | orchestrator |
2026-04-01 00:59:23.523725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-01 00:59:23.523773 | orchestrator | Wednesday 01 April 2026 00:49:42 +0000 (0:00:00.359) 0:00:50.238 *******
2026-04-01 00:59:23.523780 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523786 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.523793 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.523798 | orchestrator |
2026-04-01 00:59:23.523804 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-01 00:59:23.523810 | orchestrator | Wednesday 01 April 2026 00:49:43 +0000 (0:00:00.428) 0:00:50.666 *******
2026-04-01 00:59:23.523816 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.523821 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.523827 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.523833 | orchestrator |
2026-04-01 00:59:23.523839 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-01 00:59:23.523845 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.729) 0:00:51.396 *******
2026-04-01 00:59:23.523851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.523857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.523863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.523869 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523875 | orchestrator |
2026-04-01 00:59:23.523881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-01 00:59:23.523887 |
orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.360) 0:00:51.756 *******
2026-04-01 00:59:23.523893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.523900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.523906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.523912 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523918 | orchestrator |
2026-04-01 00:59:23.523924 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-01 00:59:23.523931 | orchestrator | Wednesday 01 April 2026 00:49:44 +0000 (0:00:00.372) 0:00:52.129 *******
2026-04-01 00:59:23.523937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.523943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.523949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.523955 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.523961 | orchestrator |
2026-04-01 00:59:23.523967 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-01 00:59:23.523973 | orchestrator | Wednesday 01 April 2026 00:49:45 +0000 (0:00:00.284) 0:00:52.414 *******
2026-04-01 00:59:23.523979 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.523985 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.523992 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.523998 | orchestrator |
2026-04-01 00:59:23.524013 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-01 00:59:23.524025 | orchestrator | Wednesday 01 April 2026 00:49:45 +0000 (0:00:00.344) 0:00:52.758 *******
2026-04-01 00:59:23.524032 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-01 00:59:23.524039 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-01 00:59:23.524084 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-01 00:59:23.524093 | orchestrator |
2026-04-01 00:59:23.524103 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-01 00:59:23.524110 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:00.645) 0:00:53.404 *******
2026-04-01 00:59:23.524118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:59:23.524124 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:59:23.524131 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:59:23.524137 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.524144 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-01 00:59:23.524151 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-01 00:59:23.524158 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-01 00:59:23.524163 | orchestrator |
2026-04-01 00:59:23.524169 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-01 00:59:23.524175 | orchestrator | Wednesday 01 April 2026 00:49:46 +0000 (0:00:00.783) 0:00:54.188 *******
2026-04-01 00:59:23.524180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:59:23.524186 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:59:23.524192 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:59:23.524197 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.524202 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-01 00:59:23.524208 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-01 00:59:23.524214 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-01 00:59:23.524220 | orchestrator |
2026-04-01 00:59:23.524225 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:59:23.524233 | orchestrator | Wednesday 01 April 2026 00:49:48 +0000 (0:00:01.975) 0:00:56.164 *******
2026-04-01 00:59:23.524240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.524247 | orchestrator |
2026-04-01 00:59:23.524253 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:59:23.524261 | orchestrator | Wednesday 01 April 2026 00:49:49 +0000 (0:00:00.929) 0:00:57.093 *******
2026-04-01 00:59:23.524266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.524272 | orchestrator |
2026-04-01 00:59:23.524278 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:59:23.524284 | orchestrator | Wednesday 01 April 2026 00:49:50 +0000 (0:00:01.090) 0:00:58.184 *******
2026-04-01 00:59:23.524290 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.524296 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.524302 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.524308 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.524315 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.524322 |
orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.524336 | orchestrator |
2026-04-01 00:59:23.524343 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:59:23.524349 | orchestrator | Wednesday 01 April 2026 00:49:52 +0000 (0:00:01.255) 0:00:59.439 *******
2026-04-01 00:59:23.524355 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.524360 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.524366 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.524372 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.524378 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.524384 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.524390 | orchestrator |
2026-04-01 00:59:23.524396 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:59:23.524402 | orchestrator | Wednesday 01 April 2026 00:49:53 +0000 (0:00:00.976) 0:01:00.416 *******
2026-04-01 00:59:23.524407 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.524413 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.524420 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.524426 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.524432 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.524437 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.524445 | orchestrator |
2026-04-01 00:59:23.524453 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:59:23.524459 | orchestrator | Wednesday 01 April 2026 00:49:53 +0000 (0:00:00.576) 0:01:00.992 *******
2026-04-01 00:59:23.524465 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.524471 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.524477 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.524483 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.524488 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.524494 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.524500 | orchestrator |
2026-04-01 00:59:23.524506 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:59:23.524512 | orchestrator | Wednesday 01 April 2026 00:49:54 +0000 (0:00:00.820) 0:01:01.813 *******
2026-04-01 00:59:23.524531 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.524538 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.524544 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.524551 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.524557 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.524600 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.524608 | orchestrator |
2026-04-01 00:59:23.524614 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:59:23.524621 | orchestrator | Wednesday 01 April 2026 00:49:55 +0000 (0:00:00.931) 0:01:02.744 *******
2026-04-01 00:59:23.524628 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.524638 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.524643 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.524649 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.524655 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.524660 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.524666 | orchestrator |
2026-04-01 00:59:23.524671 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:59:23.524677 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:00.743) 0:01:03.487 *******
2026-04-01 00:59:23.524683 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.524689 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.524696 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.524702 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.524709 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.524715 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.524721 | orchestrator |
2026-04-01 00:59:23.524727 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:59:23.524754 | orchestrator | Wednesday 01 April 2026 00:49:56 +0000 (0:00:00.740) 0:01:04.228 *******
2026-04-01 00:59:23.524769 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.524775 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.524780 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.524786 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.524792 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.524798 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.524804 | orchestrator |
2026-04-01 00:59:23.524810 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:59:23.524817 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:01.168) 0:01:05.397 *******
2026-04-01 00:59:23.524823 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.524829 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.524835 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.524841 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.524848 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.524854 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.524860 | orchestrator |
2026-04-01 00:59:23.524866 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:59:23.524872 | orchestrator | Wednesday 01 April 2026 00:49:58 +0000 (0:00:00.879) 0:01:06.276 *******
2026-04-01 00:59:23.524878 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.524884 |
orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.524891 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.524897 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.524903 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.524910 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.524916 | orchestrator | 2026-04-01 00:59:23.524923 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:59:23.524929 | orchestrator | Wednesday 01 April 2026 00:49:59 +0000 (0:00:00.790) 0:01:07.067 ******* 2026-04-01 00:59:23.524935 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.524941 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.524948 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.524953 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.524959 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.524965 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.524972 | orchestrator | 2026-04-01 00:59:23.524979 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:59:23.524985 | orchestrator | Wednesday 01 April 2026 00:50:00 +0000 (0:00:00.811) 0:01:07.879 ******* 2026-04-01 00:59:23.524991 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.524998 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.525004 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.525010 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.525016 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.525022 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.525029 | orchestrator | 2026-04-01 00:59:23.525034 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:59:23.525040 | orchestrator | Wednesday 01 April 2026 00:50:02 +0000 (0:00:01.711) 0:01:09.591 ******* 
2026-04-01 00:59:23.525047 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.525053 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.525059 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.525065 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525072 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525078 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525084 | orchestrator |
2026-04-01 00:59:23.525090 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:59:23.525097 | orchestrator | Wednesday 01 April 2026 00:50:02 +0000 (0:00:00.546) 0:01:10.137 *******
2026-04-01 00:59:23.525103 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.525109 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.525116 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.525122 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525127 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525140 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525147 | orchestrator |
2026-04-01 00:59:23.525153 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:59:23.525159 | orchestrator | Wednesday 01 April 2026 00:50:03 +0000 (0:00:00.886) 0:01:11.024 *******
2026-04-01 00:59:23.525165 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525172 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525178 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525184 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525190 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525197 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525203 | orchestrator |
2026-04-01 00:59:23.525209 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:59:23.525221 | orchestrator | Wednesday 01 April 2026 00:50:04 +0000 (0:00:00.778) 0:01:11.802 *******
2026-04-01 00:59:23.525227 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525234 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525240 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525247 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525282 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525289 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525295 | orchestrator |
2026-04-01 00:59:23.525301 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:59:23.525307 | orchestrator | Wednesday 01 April 2026 00:50:05 +0000 (0:00:00.906) 0:01:12.708 *******
2026-04-01 00:59:23.525313 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525320 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525327 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525333 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.525339 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.525345 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.525352 | orchestrator |
2026-04-01 00:59:23.525358 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:59:23.525364 | orchestrator | Wednesday 01 April 2026 00:50:06 +0000 (0:00:00.621) 0:01:13.329 *******
2026-04-01 00:59:23.525370 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.525376 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.525383 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.525388 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.525394 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.525401 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.525407 | orchestrator |
2026-04-01 00:59:23.525414 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:59:23.525420 | orchestrator | Wednesday 01 April 2026 00:50:06 +0000 (0:00:00.845) 0:01:14.175 *******
2026-04-01 00:59:23.525426 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.525432 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.525439 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.525445 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.525451 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.525457 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.525463 | orchestrator |
2026-04-01 00:59:23.525469 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-01 00:59:23.525474 | orchestrator | Wednesday 01 April 2026 00:50:08 +0000 (0:00:01.337) 0:01:15.512 *******
2026-04-01 00:59:23.525481 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.525487 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.525493 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.525500 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.525506 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.525512 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.525518 | orchestrator |
2026-04-01 00:59:23.525525 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-01 00:59:23.525531 | orchestrator | Wednesday 01 April 2026 00:50:09 +0000 (0:00:01.555) 0:01:17.068 *******
2026-04-01 00:59:23.525543 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.525549 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.525556 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.525562 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.525568 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.525574 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.525580 | orchestrator |
2026-04-01 00:59:23.525587 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-01 00:59:23.525593 | orchestrator | Wednesday 01 April 2026 00:50:12 +0000 (0:00:02.746) 0:01:19.814 *******
2026-04-01 00:59:23.525600 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.525606 | orchestrator |
2026-04-01 00:59:23.525611 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-01 00:59:23.525617 | orchestrator | Wednesday 01 April 2026 00:50:13 +0000 (0:00:01.133) 0:01:20.947 *******
2026-04-01 00:59:23.525623 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525629 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525635 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525640 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525646 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525653 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525659 | orchestrator |
2026-04-01 00:59:23.525665 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-01 00:59:23.525671 | orchestrator | Wednesday 01 April 2026 00:50:14 +0000 (0:00:00.771) 0:01:21.718 *******
2026-04-01 00:59:23.525677 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525683 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525689 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525693 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525697 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525701 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525705 | orchestrator |
2026-04-01 00:59:23.525709 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-01 00:59:23.525713 | orchestrator | Wednesday 01 April 2026 00:50:15 +0000 (0:00:00.837) 0:01:22.556 *******
2026-04-01 00:59:23.525716 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525720 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525724 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525728 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525748 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525755 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525761 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525767 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525777 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525784 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-01 00:59:23.525814 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525819 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-01 00:59:23.525823 | orchestrator |
2026-04-01 00:59:23.525827 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-01 00:59:23.525831 | orchestrator | Wednesday 01 April 2026 00:50:16 +0000 (0:00:01.640) 0:01:24.197 *******
2026-04-01 00:59:23.525839 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.525843 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.525847 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.525851 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.525855 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.525859 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.525862 | orchestrator |
2026-04-01 00:59:23.525866 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-01 00:59:23.525870 | orchestrator | Wednesday 01 April 2026 00:50:17 +0000 (0:00:01.086) 0:01:25.283 *******
2026-04-01 00:59:23.525874 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525878 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525882 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525886 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525890 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525893 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525897 | orchestrator |
2026-04-01 00:59:23.525901 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-01 00:59:23.525905 | orchestrator | Wednesday 01 April 2026 00:50:18 +0000 (0:00:00.879) 0:01:26.163 *******
2026-04-01 00:59:23.525909 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525913 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525916 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525920 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525924 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525928 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525932 | orchestrator |
2026-04-01 00:59:23.525936 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-01 00:59:23.525940 | orchestrator | Wednesday 01 April 2026 00:50:19 +0000 (0:00:00.602) 0:01:26.766 *******
2026-04-01 00:59:23.525944 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.525948 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.525951 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.525955 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.525961 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.525967 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.525973 | orchestrator |
2026-04-01 00:59:23.525978 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-01 00:59:23.525984 | orchestrator | Wednesday 01 April 2026 00:50:20 +0000 (0:00:00.705) 0:01:27.471 *******
2026-04-01 00:59:23.525990 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.525996 | orchestrator |
2026-04-01 00:59:23.526002 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-01 00:59:23.526009 | orchestrator | Wednesday 01 April 2026 00:50:21 +0000 (0:00:00.956) 0:01:28.428 *******
2026-04-01 00:59:23.526051 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.526058 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.526064 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.526070 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.526075 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.526080 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.526090 | orchestrator |
2026-04-01 00:59:23.526097 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-01 00:59:23.526105 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:53.128) 0:02:21.557 *******
2026-04-01 00:59:23.526111 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526118 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526123 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526130 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526145 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526151 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526157 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526163 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526169 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526174 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526180 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526185 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526191 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526197 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526203 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526208 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526214 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526220 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526230 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526235 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526269 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-01 00:59:23.526276 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-01 00:59:23.526282 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-01 00:59:23.526287 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526293 | orchestrator |
2026-04-01 00:59:23.526299 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-01 00:59:23.526305 | orchestrator | Wednesday 01 April 2026 00:51:14 +0000 (0:00:00.693) 0:02:22.250 *******
2026-04-01 00:59:23.526311 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526317 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526323 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526329 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526335 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526341 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526348 | orchestrator |
2026-04-01 00:59:23.526352 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-01 00:59:23.526356 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.494) 0:02:22.745 *******
2026-04-01 00:59:23.526360 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526365 | orchestrator |
2026-04-01 00:59:23.526368 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-01 00:59:23.526372 | orchestrator | Wednesday 01 April 2026 00:51:15 +0000 (0:00:00.115) 0:02:22.860 *******
2026-04-01 00:59:23.526376 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526380 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526384 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526388 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526392 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526395 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526399 | orchestrator |
2026-04-01 00:59:23.526403 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-01 00:59:23.526407 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.795) 0:02:23.655 *******
2026-04-01 00:59:23.526411 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526415 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526420 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526432 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526438 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526447 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526455 | orchestrator |
2026-04-01 00:59:23.526460 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-01 00:59:23.526466 | orchestrator | Wednesday 01 April 2026 00:51:16 +0000 (0:00:00.487) 0:02:24.142 *******
2026-04-01 00:59:23.526472 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526478 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526484 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526490 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526496 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526501 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526506 | orchestrator |
2026-04-01 00:59:23.526512 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-01 00:59:23.526517 | orchestrator | Wednesday 01 April 2026 00:51:17 +0000 (0:00:00.775) 0:02:24.918 *******
2026-04-01 00:59:23.526522 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.526528 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.526533 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.526538 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.526545 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.526551 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.526557 | orchestrator |
2026-04-01 00:59:23.526563 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-01 00:59:23.526568 | orchestrator | Wednesday 01 April 2026 00:51:20 +0000 (0:00:03.362) 0:02:28.281 *******
2026-04-01 00:59:23.526575 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.526581 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.526586 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.526592 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.526597 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.526603 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.526608 | orchestrator |
2026-04-01 00:59:23.526614 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-01 00:59:23.526620 | orchestrator | Wednesday 01 April 2026 00:51:21 +0000 (0:00:00.879) 0:02:29.160 *******
2026-04-01 00:59:23.526628 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.526636 | orchestrator |
2026-04-01 00:59:23.526642 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-01 00:59:23.526648 | orchestrator | Wednesday 01 April 2026 00:51:23 +0000 (0:00:01.195) 0:02:30.355 *******
2026-04-01 00:59:23.526656 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526667 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526673 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526678 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526684 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526690 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526697 | orchestrator |
2026-04-01 00:59:23.526703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-01 00:59:23.526711 | orchestrator | Wednesday 01 April 2026 00:51:23 +0000 (0:00:00.675) 0:02:31.031 *******
2026-04-01 00:59:23.526715 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526720 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526726 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526748 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526754 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526760 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526765 | orchestrator |
2026-04-01 00:59:23.526771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-01 00:59:23.526784 | orchestrator | Wednesday 01 April 2026 00:51:24 +0000 (0:00:00.870) 0:02:31.901 *******
2026-04-01 00:59:23.526797 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526803 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526847 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526853 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526858 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526863 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526869 | orchestrator |
2026-04-01 00:59:23.526874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-01 00:59:23.526879 | orchestrator | Wednesday 01 April 2026 00:51:25 +0000 (0:00:00.611) 0:02:32.513 *******
2026-04-01 00:59:23.526888 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526896 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526902 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526908 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526914 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526920 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526925 | orchestrator |
2026-04-01 00:59:23.526930 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-01 00:59:23.526936 | orchestrator | Wednesday 01 April 2026 00:51:25 +0000 (0:00:00.768) 0:02:33.281 *******
2026-04-01 00:59:23.526942 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526947 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.526953 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.526958 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.526964 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.526970 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.526976 | orchestrator |
2026-04-01 00:59:23.526981 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-01 00:59:23.526987 | orchestrator | Wednesday 01 April 2026 00:51:26 +0000 (0:00:00.584) 0:02:33.866 *******
2026-04-01 00:59:23.526992 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.526998 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.527003 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.527009 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.527015 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.527020 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.527026 | orchestrator |
2026-04-01 00:59:23.527031 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-01 00:59:23.527037 | orchestrator | Wednesday 01 April 2026 00:51:27 +0000 (0:00:00.782) 0:02:34.648 *******
2026-04-01 00:59:23.527042 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.527047 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.527053 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.527058 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.527064 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.527069 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.527074 | orchestrator |
2026-04-01 00:59:23.527080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-01 00:59:23.527085 | orchestrator | Wednesday 01 April 2026 00:51:27 +0000 (0:00:00.744) 0:02:35.254 *******
2026-04-01 00:59:23.527090 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.527096 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.527103 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.527108 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.527114 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.527120 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.527126 | orchestrator |
2026-04-01 00:59:23.527132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-01 00:59:23.527137 | orchestrator | Wednesday 01 April 2026 00:51:28 +0000 (0:00:00.744) 0:02:35.999 *******
2026-04-01 00:59:23.527143 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.527149 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.527155 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.527161 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.527173 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.527179 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.527185 | orchestrator |
2026-04-01 00:59:23.527191 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-01 00:59:23.527197 | orchestrator | Wednesday 01 April 2026 00:51:29 +0000 (0:00:00.968) 0:02:36.967 *******
2026-04-01 00:59:23.527204 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.527212 | orchestrator |
2026-04-01 00:59:23.527216 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-01 00:59:23.527222 | orchestrator | Wednesday 01 April 2026 00:51:30 +0000 (0:00:01.187) 0:02:38.155 *******
2026-04-01 00:59:23.527228 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-01 00:59:23.527234 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-01 00:59:23.527243 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527252 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-01 00:59:23.527257 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-01 00:59:23.527263 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527269 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-01 00:59:23.527275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527281 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527287 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527292 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527299 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-01 00:59:23.527305 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527316 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527322 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527334 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-01 00:59:23.527384 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527391 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527396 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527402 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527407 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527413 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-01 00:59:23.527419 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527424 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527430 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527436 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527442 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527449 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-01 00:59:23.527455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527460 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527474 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527479 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527493 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-01 00:59:23.527500 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527512 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527515 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527519 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527523 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527527 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527530 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-01 00:59:23.527535 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527541 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527552 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527562 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-01 00:59:23.527567 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527573 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527578 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527583 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527594 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-01 00:59:23.527600 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:59:23.527605 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:59:23.527611 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:59:23.527617 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527623 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-01 00:59:23.527629 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527633 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:59:23.527637 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:59:23.527641 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-01 00:59:23.527645 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:59:23.527651 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-01 00:59:23.527660 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-01 00:59:23.527668 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:59:23.527673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:59:23.527679 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-01 00:59:23.527685
| orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-01 00:59:23.527691 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-01 00:59:23.527697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-01 00:59:23.527715 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527721 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527822 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527832 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-01 00:59:23.527838 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-01 00:59:23.527844 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-01 00:59:23.527850 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-01 00:59:23.527856 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-01 00:59:23.527862 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-01 00:59:23.527868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527874 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-01 00:59:23.527879 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527885 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-01 00:59:23.527891 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-01 00:59:23.527897 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-01 00:59:23.527903 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-01 00:59:23.527909 | 
orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-01 00:59:23.527915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-01 00:59:23.527921 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-01 00:59:23.527927 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-01 00:59:23.527933 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-01 00:59:23.527939 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-01 00:59:23.527945 | orchestrator | 2026-04-01 00:59:23.527951 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-01 00:59:23.527958 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:07.208) 0:02:45.363 ******* 2026-04-01 00:59:23.527964 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.527970 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.527976 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.527984 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:59:23.527991 | orchestrator | 2026-04-01 00:59:23.528018 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-01 00:59:23.528023 | orchestrator | Wednesday 01 April 2026 00:51:38 +0000 (0:00:00.933) 0:02:46.298 ******* 2026-04-01 00:59:23.528030 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528036 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528043 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528049 | orchestrator | 2026-04-01 00:59:23.528055 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-01 00:59:23.528061 | orchestrator | Wednesday 01 April 2026 00:51:39 +0000 (0:00:00.736) 0:02:47.034 ******* 2026-04-01 00:59:23.528067 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528074 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528081 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528096 | orchestrator | 2026-04-01 00:59:23.528103 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-01 00:59:23.528109 | orchestrator | Wednesday 01 April 2026 00:51:40 +0000 (0:00:01.203) 0:02:48.238 ******* 2026-04-01 00:59:23.528115 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.528121 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.528125 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.528129 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528135 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528141 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528147 | orchestrator | 2026-04-01 00:59:23.528153 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-01 00:59:23.528158 | orchestrator | Wednesday 01 April 2026 00:51:41 +0000 (0:00:00.745) 0:02:48.984 ******* 2026-04-01 00:59:23.528169 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.528176 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.528184 | orchestrator | ok: 
[testbed-node-5] 2026-04-01 00:59:23.528190 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528195 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528201 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528207 | orchestrator | 2026-04-01 00:59:23.528213 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-01 00:59:23.528218 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:00.505) 0:02:49.489 ******* 2026-04-01 00:59:23.528225 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528231 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528236 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528249 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528255 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528262 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528268 | orchestrator | 2026-04-01 00:59:23.528309 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-01 00:59:23.528315 | orchestrator | Wednesday 01 April 2026 00:51:42 +0000 (0:00:00.627) 0:02:50.117 ******* 2026-04-01 00:59:23.528321 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528327 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528332 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528337 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528342 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528347 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528353 | orchestrator | 2026-04-01 00:59:23.528359 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-01 00:59:23.528365 | orchestrator | Wednesday 01 April 2026 00:51:43 +0000 (0:00:00.503) 0:02:50.621 ******* 2026-04-01 00:59:23.528370 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:59:23.528375 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528380 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528385 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528391 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528396 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528401 | orchestrator | 2026-04-01 00:59:23.528407 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-01 00:59:23.528413 | orchestrator | Wednesday 01 April 2026 00:51:44 +0000 (0:00:00.704) 0:02:51.326 ******* 2026-04-01 00:59:23.528418 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528424 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528429 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528436 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528441 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528446 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528452 | orchestrator | 2026-04-01 00:59:23.528459 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-01 00:59:23.528471 | orchestrator | Wednesday 01 April 2026 00:51:44 +0000 (0:00:00.639) 0:02:51.965 ******* 2026-04-01 00:59:23.528477 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528482 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528488 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528494 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528501 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528507 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528512 | orchestrator | 2026-04-01 00:59:23.528518 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2026-04-01 00:59:23.528523 | orchestrator | Wednesday 01 April 2026 00:51:45 +0000 (0:00:00.608) 0:02:52.574 ******* 2026-04-01 00:59:23.528529 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528535 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528541 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528547 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528554 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528558 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528562 | orchestrator | 2026-04-01 00:59:23.528566 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-01 00:59:23.528570 | orchestrator | Wednesday 01 April 2026 00:51:45 +0000 (0:00:00.541) 0:02:53.116 ******* 2026-04-01 00:59:23.528573 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528577 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528581 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528585 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.528589 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.528593 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.528597 | orchestrator | 2026-04-01 00:59:23.528601 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-01 00:59:23.528605 | orchestrator | Wednesday 01 April 2026 00:51:48 +0000 (0:00:02.974) 0:02:56.091 ******* 2026-04-01 00:59:23.528608 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.528612 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.528616 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.528622 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528628 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528634 | orchestrator | skipping: [testbed-node-2] 
2026-04-01 00:59:23.528639 | orchestrator | 2026-04-01 00:59:23.528644 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-01 00:59:23.528650 | orchestrator | Wednesday 01 April 2026 00:51:49 +0000 (0:00:00.576) 0:02:56.667 ******* 2026-04-01 00:59:23.528656 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.528662 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.528667 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.528673 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528678 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528684 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528690 | orchestrator | 2026-04-01 00:59:23.528695 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-01 00:59:23.528699 | orchestrator | Wednesday 01 April 2026 00:51:50 +0000 (0:00:00.806) 0:02:57.473 ******* 2026-04-01 00:59:23.528703 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528707 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528711 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528715 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528719 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528723 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528726 | orchestrator | 2026-04-01 00:59:23.528730 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-01 00:59:23.528752 | orchestrator | Wednesday 01 April 2026 00:51:51 +0000 (0:00:00.860) 0:02:58.334 ******* 2026-04-01 00:59:23.528756 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528766 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528775 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.528779 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528810 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528814 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528818 | orchestrator | 2026-04-01 00:59:23.528822 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-01 00:59:23.528826 | orchestrator | Wednesday 01 April 2026 00:51:52 +0000 (0:00:01.170) 0:02:59.505 ******* 2026-04-01 00:59:23.528832 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-01 00:59:23.528838 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-01 00:59:23.528844 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528848 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-01 00:59:23.528852 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-01 00:59:23.528856 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-01 00:59:23.528860 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-01 00:59:23.528864 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528868 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528873 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528879 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528885 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528892 | orchestrator | 2026-04-01 00:59:23.528901 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-01 00:59:23.528909 | orchestrator | Wednesday 01 April 2026 00:51:53 +0000 (0:00:01.070) 0:03:00.575 ******* 2026-04-01 00:59:23.528914 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528920 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528925 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528931 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.528937 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.528943 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.528953 | orchestrator | 
2026-04-01 00:59:23.528959 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-01 00:59:23.528971 | orchestrator | Wednesday 01 April 2026 00:51:53 +0000 (0:00:00.633) 0:03:01.209 ******* 2026-04-01 00:59:23.528978 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.528984 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.528989 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.528995 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529001 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529006 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529012 | orchestrator | 2026-04-01 00:59:23.529019 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-01 00:59:23.529025 | orchestrator | Wednesday 01 April 2026 00:51:54 +0000 (0:00:00.998) 0:03:02.207 ******* 2026-04-01 00:59:23.529029 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529032 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.529036 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529040 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.529044 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529048 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529052 | orchestrator | 2026-04-01 00:59:23.529055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-01 00:59:23.529059 | orchestrator | Wednesday 01 April 2026 00:51:55 +0000 (0:00:00.741) 0:03:02.949 ******* 2026-04-01 00:59:23.529063 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529067 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.529071 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.529075 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 00:59:23.529082 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529087 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529090 | orchestrator | 2026-04-01 00:59:23.529094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-01 00:59:23.529117 | orchestrator | Wednesday 01 April 2026 00:51:56 +0000 (0:00:01.043) 0:03:03.993 ******* 2026-04-01 00:59:23.529122 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529125 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.529129 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.529133 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529137 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529141 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529145 | orchestrator | 2026-04-01 00:59:23.529149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-01 00:59:23.529156 | orchestrator | Wednesday 01 April 2026 00:51:57 +0000 (0:00:00.607) 0:03:04.601 ******* 2026-04-01 00:59:23.529162 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.529170 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.529179 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529185 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.529191 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529197 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529203 | orchestrator | 2026-04-01 00:59:23.529208 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-01 00:59:23.529214 | orchestrator | Wednesday 01 April 2026 00:51:58 +0000 (0:00:00.871) 0:03:05.473 ******* 2026-04-01 00:59:23.529220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:59:23.529226 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:59:23.529231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:59:23.529236 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529242 | orchestrator | 2026-04-01 00:59:23.529248 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-01 00:59:23.529254 | orchestrator | Wednesday 01 April 2026 00:51:58 +0000 (0:00:00.427) 0:03:05.900 ******* 2026-04-01 00:59:23.529260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:59:23.529271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:59:23.529278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:59:23.529284 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529289 | orchestrator | 2026-04-01 00:59:23.529295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-01 00:59:23.529301 | orchestrator | Wednesday 01 April 2026 00:51:58 +0000 (0:00:00.394) 0:03:06.295 ******* 2026-04-01 00:59:23.529307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:59:23.529313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:59:23.529319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:59:23.529325 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529331 | orchestrator | 2026-04-01 00:59:23.529336 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-01 00:59:23.529342 | orchestrator | Wednesday 01 April 2026 00:51:59 +0000 (0:00:00.400) 0:03:06.696 ******* 2026-04-01 00:59:23.529348 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.529353 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.529359 | orchestrator | ok: [testbed-node-5] 
2026-04-01 00:59:23.529365 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529371 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529376 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529382 | orchestrator | 2026-04-01 00:59:23.529388 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-01 00:59:23.529393 | orchestrator | Wednesday 01 April 2026 00:52:00 +0000 (0:00:00.802) 0:03:07.498 ******* 2026-04-01 00:59:23.529399 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-01 00:59:23.529405 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-01 00:59:23.529411 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-01 00:59:23.529417 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.529423 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-01 00:59:23.529429 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-01 00:59:23.529435 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.529441 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-01 00:59:23.529447 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.529453 | orchestrator | 2026-04-01 00:59:23.529459 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-01 00:59:23.529465 | orchestrator | Wednesday 01 April 2026 00:52:02 +0000 (0:00:02.053) 0:03:09.552 ******* 2026-04-01 00:59:23.529471 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.529477 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.529483 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.529490 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.529495 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.529502 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.529508 | orchestrator | 2026-04-01 00:59:23.529514 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:59:23.529521 | orchestrator | Wednesday 01 April 2026 00:52:05 +0000 (0:00:03.070) 0:03:12.623 ******* 2026-04-01 00:59:23.529527 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.529533 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.529538 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.529544 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.529550 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.529556 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.529562 | orchestrator | 2026-04-01 00:59:23.529568 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-01 00:59:23.529574 | orchestrator | Wednesday 01 April 2026 00:52:06 +0000 (0:00:01.032) 0:03:13.655 ******* 2026-04-01 00:59:23.529580 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.529587 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.529600 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.529613 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.529620 | orchestrator | 2026-04-01 00:59:23.529626 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-01 00:59:23.529672 | orchestrator | Wednesday 01 April 2026 00:52:07 +0000 (0:00:00.857) 0:03:14.513 ******* 2026-04-01 00:59:23.529680 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.529686 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.529692 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.529698 | orchestrator | 2026-04-01 00:59:23.529703 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-01 00:59:23.529709 | orchestrator | Wednesday 01 April 2026 00:52:07 +0000 
(0:00:00.279) 0:03:14.792 *******
2026-04-01 00:59:23.529715 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.529721 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.529727 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.529781 | orchestrator |
2026-04-01 00:59:23.529789 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-01 00:59:23.529795 | orchestrator | Wednesday 01 April 2026 00:52:08 +0000 (0:00:01.489) 0:03:16.282 *******
2026-04-01 00:59:23.529800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:59:23.529807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:59:23.529813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:59:23.529819 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.529824 | orchestrator |
2026-04-01 00:59:23.529830 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-01 00:59:23.529836 | orchestrator | Wednesday 01 April 2026 00:52:09 +0000 (0:00:00.561) 0:03:16.844 *******
2026-04-01 00:59:23.529841 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.529847 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.529853 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.529859 | orchestrator |
2026-04-01 00:59:23.529865 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-01 00:59:23.529870 | orchestrator | Wednesday 01 April 2026 00:52:09 +0000 (0:00:00.338) 0:03:17.182 *******
2026-04-01 00:59:23.529876 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.529882 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.529887 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.529893 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.529899 | orchestrator |
2026-04-01 00:59:23.529904 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-01 00:59:23.529910 | orchestrator | Wednesday 01 April 2026 00:52:10 +0000 (0:00:00.884) 0:03:18.066 *******
2026-04-01 00:59:23.529916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.529922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.529928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.529933 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.529939 | orchestrator |
2026-04-01 00:59:23.529945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-01 00:59:23.529952 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:00.395) 0:03:18.462 *******
2026-04-01 00:59:23.529958 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.529964 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.529969 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.529975 | orchestrator |
2026-04-01 00:59:23.529981 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-01 00:59:23.529987 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:00.336) 0:03:18.799 *******
2026-04-01 00:59:23.529994 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530009 | orchestrator |
2026-04-01 00:59:23.530056 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-01 00:59:23.530064 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:00.183) 0:03:18.982 *******
2026-04-01 00:59:23.530071 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530077 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.530084 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.530089 | orchestrator |
2026-04-01 00:59:23.530095 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-01 00:59:23.530101 | orchestrator | Wednesday 01 April 2026 00:52:11 +0000 (0:00:00.264) 0:03:19.246 *******
2026-04-01 00:59:23.530106 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530111 | orchestrator |
2026-04-01 00:59:23.530117 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-01 00:59:23.530123 | orchestrator | Wednesday 01 April 2026 00:52:12 +0000 (0:00:00.201) 0:03:19.448 *******
2026-04-01 00:59:23.530129 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530134 | orchestrator |
2026-04-01 00:59:23.530140 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-01 00:59:23.530147 | orchestrator | Wednesday 01 April 2026 00:52:12 +0000 (0:00:00.480) 0:03:19.929 *******
2026-04-01 00:59:23.530153 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530160 | orchestrator |
2026-04-01 00:59:23.530166 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-01 00:59:23.530171 | orchestrator | Wednesday 01 April 2026 00:52:12 +0000 (0:00:00.093) 0:03:20.022 *******
2026-04-01 00:59:23.530177 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530184 | orchestrator |
2026-04-01 00:59:23.530189 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-01 00:59:23.530195 | orchestrator | Wednesday 01 April 2026 00:52:12 +0000 (0:00:00.175) 0:03:20.198 *******
2026-04-01 00:59:23.530201 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530206 | orchestrator |
2026-04-01 00:59:23.530212 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-01 00:59:23.530218 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:00.194) 0:03:20.393 *******
2026-04-01 00:59:23.530224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.530230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.530236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.530249 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530256 | orchestrator |
2026-04-01 00:59:23.530261 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-01 00:59:23.530314 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:00.353) 0:03:20.746 *******
2026-04-01 00:59:23.530324 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530330 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.530337 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.530343 | orchestrator |
2026-04-01 00:59:23.530349 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-01 00:59:23.530355 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:00.298) 0:03:21.045 *******
2026-04-01 00:59:23.530361 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530367 | orchestrator |
2026-04-01 00:59:23.530373 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-01 00:59:23.530379 | orchestrator | Wednesday 01 April 2026 00:52:13 +0000 (0:00:00.188) 0:03:21.233 *******
2026-04-01 00:59:23.530384 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530390 | orchestrator |
2026-04-01 00:59:23.530396 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-01 00:59:23.530402 | orchestrator | Wednesday 01 April 2026 00:52:14 +0000 (0:00:00.192) 0:03:21.426 *******
2026-04-01 00:59:23.530409 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.530414 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.530428 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.530436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.530442 | orchestrator |
2026-04-01 00:59:23.530448 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-01 00:59:23.530455 | orchestrator | Wednesday 01 April 2026 00:52:15 +0000 (0:00:00.953) 0:03:22.379 *******
2026-04-01 00:59:23.530461 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.530468 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.530474 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.530480 | orchestrator |
2026-04-01 00:59:23.530486 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-01 00:59:23.530491 | orchestrator | Wednesday 01 April 2026 00:52:15 +0000 (0:00:00.326) 0:03:22.705 *******
2026-04-01 00:59:23.530498 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.530503 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.530509 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.530515 | orchestrator |
2026-04-01 00:59:23.530521 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-01 00:59:23.530526 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:01.653) 0:03:24.359 *******
2026-04-01 00:59:23.530532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.530539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.530543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.530547 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530551 | orchestrator |
2026-04-01 00:59:23.530555 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-01 00:59:23.530559 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:00.584) 0:03:24.943 *******
2026-04-01 00:59:23.530563 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.530567 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.530570 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.530574 | orchestrator |
2026-04-01 00:59:23.530578 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-01 00:59:23.530582 | orchestrator | Wednesday 01 April 2026 00:52:17 +0000 (0:00:00.283) 0:03:25.226 *******
2026-04-01 00:59:23.530586 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.530590 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.530594 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.530597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.530601 | orchestrator |
2026-04-01 00:59:23.530605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-01 00:59:23.530609 | orchestrator | Wednesday 01 April 2026 00:52:18 +0000 (0:00:00.823) 0:03:26.050 *******
2026-04-01 00:59:23.530613 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.530617 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.530621 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.530625 | orchestrator |
2026-04-01 00:59:23.530628 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-01 00:59:23.530632 | orchestrator | Wednesday 01 April 2026 00:52:19 +0000 (0:00:00.277) 0:03:26.328 *******
2026-04-01 00:59:23.530636 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.530640 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.530644 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.530648 | orchestrator |
2026-04-01 00:59:23.530652 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-01 00:59:23.530655 | orchestrator | Wednesday 01 April 2026 00:52:20 +0000 (0:00:01.101) 0:03:27.429 *******
2026-04-01 00:59:23.530659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.530663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.530678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.530682 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530686 | orchestrator |
2026-04-01 00:59:23.530690 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-01 00:59:23.530693 | orchestrator | Wednesday 01 April 2026 00:52:20 +0000 (0:00:00.741) 0:03:28.171 *******
2026-04-01 00:59:23.530699 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.530705 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.530711 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.530717 | orchestrator |
2026-04-01 00:59:23.530723 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-01 00:59:23.530753 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:00.337) 0:03:28.508 *******
2026-04-01 00:59:23.530759 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530773 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.530779 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.530785 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.530790 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.530835 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.530841 | orchestrator |
2026-04-01 00:59:23.530847 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-01 00:59:23.530852 | orchestrator | Wednesday 01 April 2026 00:52:21 +0000 (0:00:00.571) 0:03:29.079 *******
2026-04-01 00:59:23.530858 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.530865 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.530871 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.530877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.530883 | orchestrator |
2026-04-01 00:59:23.530888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-01 00:59:23.530895 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.850) 0:03:29.930 *******
2026-04-01 00:59:23.530901 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.530907 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.530912 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.530918 | orchestrator |
2026-04-01 00:59:23.530924 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-01 00:59:23.530930 | orchestrator | Wednesday 01 April 2026 00:52:22 +0000 (0:00:00.285) 0:03:30.216 *******
2026-04-01 00:59:23.530935 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.530942 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.530947 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.530953 | orchestrator |
2026-04-01 00:59:23.530959 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-01 00:59:23.530965 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:01.182) 0:03:31.398 *******
2026-04-01 00:59:23.530971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:59:23.530976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:59:23.530982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:59:23.530988 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.530993 | orchestrator |
2026-04-01 00:59:23.530999 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-01 00:59:23.531005 | orchestrator | Wednesday 01 April 2026 00:52:24 +0000 (0:00:00.821) 0:03:32.220 *******
2026-04-01 00:59:23.531012 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531017 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531022 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531028 | orchestrator |
2026-04-01 00:59:23.531034 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-01 00:59:23.531040 | orchestrator |
2026-04-01 00:59:23.531046 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:59:23.531050 | orchestrator | Wednesday 01 April 2026 00:52:25 +0000 (0:00:00.837) 0:03:33.057 *******
2026-04-01 00:59:23.531062 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.531067 | orchestrator |
2026-04-01 00:59:23.531071 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:59:23.531075 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.509) 0:03:33.567 *******
2026-04-01 00:59:23.531078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.531082 | orchestrator |
2026-04-01 00:59:23.531086 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:59:23.531090 | orchestrator | Wednesday 01 April 2026 00:52:26 +0000 (0:00:00.711) 0:03:34.278 *******
2026-04-01 00:59:23.531094 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531098 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531101 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531105 | orchestrator |
2026-04-01 00:59:23.531109 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:59:23.531113 | orchestrator | Wednesday 01 April 2026 00:52:27 +0000 (0:00:00.736) 0:03:35.015 *******
2026-04-01 00:59:23.531117 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531120 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531124 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531128 | orchestrator |
2026-04-01 00:59:23.531132 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:59:23.531136 | orchestrator | Wednesday 01 April 2026 00:52:27 +0000 (0:00:00.293) 0:03:35.308 *******
2026-04-01 00:59:23.531140 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531143 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531147 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531151 | orchestrator |
2026-04-01 00:59:23.531155 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:59:23.531159 | orchestrator | Wednesday 01 April 2026 00:52:28 +0000 (0:00:00.299) 0:03:35.608 *******
2026-04-01 00:59:23.531162 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531166 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531170 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531174 | orchestrator |
2026-04-01 00:59:23.531178 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:59:23.531181 | orchestrator | Wednesday 01 April 2026 00:52:28 +0000 (0:00:00.310) 0:03:35.919 *******
2026-04-01 00:59:23.531185 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531189 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531193 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531197 | orchestrator |
2026-04-01 00:59:23.531200 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:59:23.531204 | orchestrator | Wednesday 01 April 2026 00:52:29 +0000 (0:00:01.042) 0:03:36.962 *******
2026-04-01 00:59:23.531208 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531212 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531215 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531220 | orchestrator |
2026-04-01 00:59:23.531232 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:59:23.531238 | orchestrator | Wednesday 01 April 2026 00:52:29 +0000 (0:00:00.324) 0:03:37.286 *******
2026-04-01 00:59:23.531273 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531280 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531286 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531292 | orchestrator |
2026-04-01 00:59:23.531297 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:59:23.531303 | orchestrator | Wednesday 01 April 2026 00:52:30 +0000 (0:00:00.302) 0:03:37.588 *******
2026-04-01 00:59:23.531309 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531315 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531328 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531334 | orchestrator |
2026-04-01 00:59:23.531340 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:59:23.531346 | orchestrator | Wednesday 01 April 2026 00:52:31 +0000 (0:00:00.785) 0:03:38.373 *******
2026-04-01 00:59:23.531352 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531358 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531364 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531369 | orchestrator |
2026-04-01 00:59:23.531375 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:59:23.531380 | orchestrator | Wednesday 01 April 2026 00:52:32 +0000 (0:00:00.990) 0:03:39.364 *******
2026-04-01 00:59:23.531387 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531393 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531398 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531405 | orchestrator |
2026-04-01 00:59:23.531411 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:59:23.531416 | orchestrator | Wednesday 01 April 2026 00:52:32 +0000 (0:00:00.310) 0:03:39.674 *******
2026-04-01 00:59:23.531421 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531427 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531434 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531440 | orchestrator |
2026-04-01 00:59:23.531446 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:59:23.531451 | orchestrator | Wednesday 01 April 2026 00:52:32 +0000 (0:00:00.325) 0:03:39.999 *******
2026-04-01 00:59:23.531457 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531463 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531469 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531474 | orchestrator |
2026-04-01 00:59:23.531479 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:59:23.531485 | orchestrator | Wednesday 01 April 2026 00:52:32 +0000 (0:00:00.264) 0:03:40.264 *******
2026-04-01 00:59:23.531490 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531496 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531502 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531508 | orchestrator |
2026-04-01 00:59:23.531514 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:59:23.531520 | orchestrator | Wednesday 01 April 2026 00:52:33 +0000 (0:00:00.290) 0:03:40.555 *******
2026-04-01 00:59:23.531525 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531531 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531538 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531544 | orchestrator |
2026-04-01 00:59:23.531549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:59:23.531555 | orchestrator | Wednesday 01 April 2026 00:52:33 +0000 (0:00:00.545) 0:03:41.100 *******
2026-04-01 00:59:23.531562 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531568 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531572 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531576 | orchestrator |
2026-04-01 00:59:23.531580 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:59:23.531584 | orchestrator | Wednesday 01 April 2026 00:52:34 +0000 (0:00:00.291) 0:03:41.392 *******
2026-04-01 00:59:23.531589 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531595 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.531601 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.531606 | orchestrator |
2026-04-01 00:59:23.531611 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:59:23.531617 | orchestrator | Wednesday 01 April 2026 00:52:34 +0000 (0:00:00.305) 0:03:41.697 *******
2026-04-01 00:59:23.531623 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531629 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531634 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531649 | orchestrator |
2026-04-01 00:59:23.531657 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:59:23.531666 | orchestrator | Wednesday 01 April 2026 00:52:34 +0000 (0:00:00.321) 0:03:42.019 *******
2026-04-01 00:59:23.531672 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531680 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531685 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531691 | orchestrator |
2026-04-01 00:59:23.531697 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:59:23.531703 | orchestrator | Wednesday 01 April 2026 00:52:35 +0000 (0:00:00.665) 0:03:42.684 *******
2026-04-01 00:59:23.531711 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531715 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531719 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531723 | orchestrator |
2026-04-01 00:59:23.531727 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-01 00:59:23.531753 | orchestrator | Wednesday 01 April 2026 00:52:35 +0000 (0:00:00.556) 0:03:43.241 *******
2026-04-01 00:59:23.531759 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531765 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531770 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531776 | orchestrator |
2026-04-01 00:59:23.531782 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-01 00:59:23.531788 | orchestrator | Wednesday 01 April 2026 00:52:36 +0000 (0:00:00.307) 0:03:43.549 *******
2026-04-01 00:59:23.531795 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.531802 | orchestrator |
2026-04-01 00:59:23.531807 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-01 00:59:23.531819 | orchestrator | Wednesday 01 April 2026 00:52:37 +0000 (0:00:00.771) 0:03:44.321 *******
2026-04-01 00:59:23.531825 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.531830 | orchestrator |
2026-04-01 00:59:23.531873 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-01 00:59:23.531881 | orchestrator | Wednesday 01 April 2026 00:52:37 +0000 (0:00:00.167) 0:03:44.488 *******
2026-04-01 00:59:23.531887 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-01 00:59:23.531893 | orchestrator |
2026-04-01 00:59:23.531898 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-01 00:59:23.531904 | orchestrator | Wednesday 01 April 2026 00:52:38 +0000 (0:00:01.053) 0:03:45.542 *******
2026-04-01 00:59:23.531909 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531916 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531923 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531927 | orchestrator |
2026-04-01 00:59:23.531931 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-01 00:59:23.531937 | orchestrator | Wednesday 01 April 2026 00:52:38 +0000 (0:00:00.309) 0:03:45.851 *******
2026-04-01 00:59:23.531943 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.531948 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.531954 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.531960 | orchestrator |
2026-04-01 00:59:23.531966 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-01 00:59:23.531972 | orchestrator | Wednesday 01 April 2026 00:52:38 +0000 (0:00:00.319) 0:03:46.170 *******
2026-04-01 00:59:23.531978 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.531984 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.531990 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.531995 | orchestrator |
2026-04-01 00:59:23.532001 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-01 00:59:23.532007 | orchestrator | Wednesday 01 April 2026 00:52:40 +0000 (0:00:01.456) 0:03:47.627 *******
2026-04-01 00:59:23.532011 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532015 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532018 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532028 | orchestrator |
2026-04-01 00:59:23.532032 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-01 00:59:23.532038 | orchestrator | Wednesday 01 April 2026 00:52:41 +0000 (0:00:00.789) 0:03:48.417 *******
2026-04-01 00:59:23.532044 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532049 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532055 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532060 | orchestrator |
2026-04-01 00:59:23.532065 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-01 00:59:23.532070 | orchestrator | Wednesday 01 April 2026 00:52:41 +0000 (0:00:00.710) 0:03:49.128 *******
2026-04-01 00:59:23.532075 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.532083 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.532091 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.532098 | orchestrator |
2026-04-01 00:59:23.532104 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-01 00:59:23.532110 | orchestrator | Wednesday 01 April 2026 00:52:42 +0000 (0:00:00.693) 0:03:49.821 *******
2026-04-01 00:59:23.532115 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532121 | orchestrator |
2026-04-01 00:59:23.532127 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-01 00:59:23.532132 | orchestrator | Wednesday 01 April 2026 00:52:43 +0000 (0:00:01.381) 0:03:51.202 *******
2026-04-01 00:59:23.532138 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.532143 | orchestrator |
2026-04-01 00:59:23.532149 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-01 00:59:23.532155 | orchestrator | Wednesday 01 April 2026 00:52:44 +0000 (0:00:00.915) 0:03:52.118 *******
2026-04-01 00:59:23.532160 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-01 00:59:23.532166 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.532172 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.532178 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 00:59:23.532184 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 00:59:23.532190 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-01 00:59:23.532195 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 00:59:23.532201 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-01 00:59:23.532207 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-01 00:59:23.532213 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-01 00:59:23.532218 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 00:59:23.532224 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-01 00:59:23.532229 | orchestrator |
2026-04-01 00:59:23.532235 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-01 00:59:23.532240 | orchestrator | Wednesday 01 April 2026 00:52:48 +0000 (0:00:03.493) 0:03:55.611 *******
2026-04-01 00:59:23.532245 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532251 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532257 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532262 | orchestrator |
2026-04-01 00:59:23.532268 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-01 00:59:23.532273 | orchestrator | Wednesday 01 April 2026 00:52:49 +0000 (0:00:01.135) 0:03:56.746 *******
2026-04-01 00:59:23.532278 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.532283 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.532289 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.532295 | orchestrator |
2026-04-01 00:59:23.532301 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-01 00:59:23.532306 | orchestrator | Wednesday 01 April 2026 00:52:49 +0000 (0:00:00.355) 0:03:57.101 *******
2026-04-01 00:59:23.532311 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.532326 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.532331 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.532337 | orchestrator |
2026-04-01 00:59:23.532347 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-01 00:59:23.532353 | orchestrator | Wednesday 01 April 2026 00:52:50 +0000 (0:00:00.312) 0:03:57.414 *******
2026-04-01 00:59:23.532358 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532398 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532404 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532410 | orchestrator |
2026-04-01 00:59:23.532416 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-01 00:59:23.532424 | orchestrator | Wednesday 01 April 2026 00:52:52 +0000 (0:00:02.348) 0:03:59.762 *******
2026-04-01 00:59:23.532429 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532435 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532441 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532447 | orchestrator |
2026-04-01 00:59:23.532452 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-01 00:59:23.532458 | orchestrator | Wednesday 01 April 2026 00:52:53 +0000 (0:00:01.233) 0:04:00.996 *******
2026-04-01 00:59:23.532464 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.532471 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.532475 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.532479 | orchestrator |
2026-04-01 00:59:23.532483 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-01 00:59:23.532487 | orchestrator | Wednesday 01 April 2026 00:52:54 +0000 (0:00:00.353) 0:04:01.349 *******
2026-04-01 00:59:23.532491 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.532498 | orchestrator |
2026-04-01 00:59:23.532503 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-01 00:59:23.532509 | orchestrator | Wednesday 01 April 2026 00:52:54 +0000 (0:00:00.721) 0:04:02.070 *******
2026-04-01 00:59:23.532516 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.532521 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.532525 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.532529 | orchestrator |
2026-04-01 00:59:23.532532 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-01 00:59:23.532536 | orchestrator | Wednesday 01 April 2026 00:52:55 +0000 (0:00:00.305) 0:04:02.375 *******
2026-04-01 00:59:23.532540 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.532544 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.532547 | orchestrator | skipping: [testbed-node-2]
2026-04-01 00:59:23.532551 | orchestrator |
2026-04-01 00:59:23.532555 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-01 00:59:23.532559 | orchestrator | Wednesday 01 April 2026 00:52:55 +0000 (0:00:00.293) 0:04:02.669 *******
2026-04-01 00:59:23.532563 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.532567 | orchestrator |
2026-04-01 00:59:23.532571 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-01 00:59:23.532575 | orchestrator | Wednesday 01 April 2026 00:52:55 +0000 (0:00:00.515) 0:04:03.185 *******
2026-04-01 00:59:23.532579 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532584 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532589 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532595 | orchestrator |
2026-04-01 00:59:23.532601 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-01 00:59:23.532610 | orchestrator | Wednesday 01 April 2026 00:52:57 +0000 (0:00:01.957) 0:04:05.143 *******
2026-04-01 00:59:23.532617 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532623 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532628 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532635 | orchestrator |
2026-04-01 00:59:23.532640 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-01 00:59:23.532654 | orchestrator | Wednesday 01 April 2026 00:52:59 +0000 (0:00:01.388) 0:04:06.532 *******
2026-04-01 00:59:23.532660 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532665 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532671 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532677 | orchestrator |
2026-04-01 00:59:23.532683 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-01 00:59:23.532689 | orchestrator | Wednesday 01 April 2026 00:53:01 +0000 (0:00:01.950) 0:04:08.482 *******
2026-04-01 00:59:23.532695 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.532705 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.532711 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.532718 | orchestrator |
2026-04-01 00:59:23.532723 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-01 00:59:23.532729 | orchestrator | Wednesday 01 April 2026 00:53:03 +0000 (0:00:02.039) 0:04:10.522 *******
2026-04-01 00:59:23.532780 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.532788 | orchestrator |
2026-04-01 00:59:23.532795 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-01 00:59:23.532804 | orchestrator | Wednesday 01 April 2026 00:53:04 +0000 (0:00:00.829) 0:04:11.351 *******
2026-04-01 00:59:23.532811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-01 00:59:23.532816 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.532822 | orchestrator | 2026-04-01 00:59:23.532827 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-01 00:59:23.532833 | orchestrator | Wednesday 01 April 2026 00:53:25 +0000 (0:00:21.656) 0:04:33.008 ******* 2026-04-01 00:59:23.532839 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.532845 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.532851 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.532857 | orchestrator | 2026-04-01 00:59:23.532863 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-01 00:59:23.532869 | orchestrator | Wednesday 01 April 2026 00:53:34 +0000 (0:00:08.312) 0:04:41.320 ******* 2026-04-01 00:59:23.532874 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.532880 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.532892 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.532901 | orchestrator | 2026-04-01 00:59:23.532909 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-01 00:59:23.532950 | orchestrator | Wednesday 01 April 2026 00:53:34 +0000 (0:00:00.363) 0:04:41.684 ******* 2026-04-01 00:59:23.532963 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-01 00:59:23.532971 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-01 00:59:23.532979 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-01 00:59:23.532987 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-01 00:59:23.533003 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-01 00:59:23.533010 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__26898957db289102fbf3eb71452733ae9e7f5e6d'}])  2026-04-01 00:59:23.533019 | orchestrator | 2026-04-01 00:59:23.533024 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:59:23.533030 | orchestrator | Wednesday 01 April 2026 00:53:48 +0000 (0:00:14.056) 0:04:55.741 ******* 2026-04-01 00:59:23.533036 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533042 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533047 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533053 | orchestrator | 2026-04-01 00:59:23.533060 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-01 00:59:23.533066 | orchestrator | Wednesday 01 April 2026 00:53:48 +0000 (0:00:00.305) 0:04:56.046 ******* 2026-04-01 00:59:23.533071 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.533077 | orchestrator | 2026-04-01 00:59:23.533082 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-01 00:59:23.533088 | orchestrator | Wednesday 01 April 2026 00:53:49 +0000 (0:00:00.672) 0:04:56.718 ******* 2026-04-01 00:59:23.533093 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533098 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533104 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533110 | orchestrator | 2026-04-01 00:59:23.533115 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-01 00:59:23.533121 | orchestrator | Wednesday 01 April 2026 00:53:49 +0000 (0:00:00.325) 0:04:57.043 ******* 2026-04-01 00:59:23.533126 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533131 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533138 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533148 | orchestrator | 2026-04-01 00:59:23.533156 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-01 
00:59:23.533164 | orchestrator | Wednesday 01 April 2026 00:53:50 +0000 (0:00:00.312) 0:04:57.356 ******* 2026-04-01 00:59:23.533172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-01 00:59:23.533181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-01 00:59:23.533189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-01 00:59:23.533196 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533204 | orchestrator | 2026-04-01 00:59:23.533214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-01 00:59:23.533221 | orchestrator | Wednesday 01 April 2026 00:53:50 +0000 (0:00:00.837) 0:04:58.194 ******* 2026-04-01 00:59:23.533228 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533234 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533267 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533274 | orchestrator | 2026-04-01 00:59:23.533280 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-01 00:59:23.533292 | orchestrator | 2026-04-01 00:59:23.533298 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:59:23.533304 | orchestrator | Wednesday 01 April 2026 00:53:51 +0000 (0:00:00.775) 0:04:58.969 ******* 2026-04-01 00:59:23.533311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.533316 | orchestrator | 2026-04-01 00:59:23.533320 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:59:23.533324 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:00.480) 0:04:59.450 ******* 2026-04-01 00:59:23.533328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-01 00:59:23.533332 | orchestrator | 2026-04-01 00:59:23.533336 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-01 00:59:23.533340 | orchestrator | Wednesday 01 April 2026 00:53:52 +0000 (0:00:00.713) 0:05:00.163 ******* 2026-04-01 00:59:23.533344 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533348 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533351 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533355 | orchestrator | 2026-04-01 00:59:23.533359 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-01 00:59:23.533364 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.814) 0:05:00.977 ******* 2026-04-01 00:59:23.533369 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533375 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533381 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533390 | orchestrator | 2026-04-01 00:59:23.533399 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:59:23.533404 | orchestrator | Wednesday 01 April 2026 00:53:53 +0000 (0:00:00.285) 0:05:01.262 ******* 2026-04-01 00:59:23.533411 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533417 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533423 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533429 | orchestrator | 2026-04-01 00:59:23.533434 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:59:23.533440 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.384) 0:05:01.647 ******* 2026-04-01 00:59:23.533445 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533451 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533457 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:59:23.533463 | orchestrator | 2026-04-01 00:59:23.533470 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-01 00:59:23.533475 | orchestrator | Wednesday 01 April 2026 00:53:54 +0000 (0:00:00.555) 0:05:02.202 ******* 2026-04-01 00:59:23.533479 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533483 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533486 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533490 | orchestrator | 2026-04-01 00:59:23.533494 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:59:23.533498 | orchestrator | Wednesday 01 April 2026 00:53:55 +0000 (0:00:00.765) 0:05:02.968 ******* 2026-04-01 00:59:23.533502 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533506 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533509 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533513 | orchestrator | 2026-04-01 00:59:23.533517 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:59:23.533522 | orchestrator | Wednesday 01 April 2026 00:53:55 +0000 (0:00:00.296) 0:05:03.264 ******* 2026-04-01 00:59:23.533528 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533534 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533540 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533549 | orchestrator | 2026-04-01 00:59:23.533556 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:59:23.533575 | orchestrator | Wednesday 01 April 2026 00:53:56 +0000 (0:00:00.306) 0:05:03.571 ******* 2026-04-01 00:59:23.533581 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533587 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533593 | orchestrator | ok: [testbed-node-2] 2026-04-01 
00:59:23.533599 | orchestrator | 2026-04-01 00:59:23.533605 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:59:23.533611 | orchestrator | Wednesday 01 April 2026 00:53:57 +0000 (0:00:00.745) 0:05:04.316 ******* 2026-04-01 00:59:23.533616 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533621 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533627 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533632 | orchestrator | 2026-04-01 00:59:23.533638 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:59:23.533643 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:01.004) 0:05:05.320 ******* 2026-04-01 00:59:23.533650 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533655 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533661 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533667 | orchestrator | 2026-04-01 00:59:23.533672 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:59:23.533678 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.280) 0:05:05.601 ******* 2026-04-01 00:59:23.533684 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533689 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533695 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533701 | orchestrator | 2026-04-01 00:59:23.533706 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:59:23.533712 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.344) 0:05:05.946 ******* 2026-04-01 00:59:23.533717 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533723 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533750 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533757 | orchestrator | 
2026-04-01 00:59:23.533763 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:59:23.533800 | orchestrator | Wednesday 01 April 2026 00:53:58 +0000 (0:00:00.270) 0:05:06.217 ******* 2026-04-01 00:59:23.533808 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533814 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533820 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533827 | orchestrator | 2026-04-01 00:59:23.533832 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-01 00:59:23.533838 | orchestrator | Wednesday 01 April 2026 00:53:59 +0000 (0:00:00.568) 0:05:06.785 ******* 2026-04-01 00:59:23.533844 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533850 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533856 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533862 | orchestrator | 2026-04-01 00:59:23.533867 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-01 00:59:23.533874 | orchestrator | Wednesday 01 April 2026 00:53:59 +0000 (0:00:00.305) 0:05:07.090 ******* 2026-04-01 00:59:23.533879 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533885 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533891 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533897 | orchestrator | 2026-04-01 00:59:23.533903 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-01 00:59:23.533908 | orchestrator | Wednesday 01 April 2026 00:54:00 +0000 (0:00:00.282) 0:05:07.373 ******* 2026-04-01 00:59:23.533914 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.533919 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.533925 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.533931 | orchestrator | 
2026-04-01 00:59:23.533937 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-01 00:59:23.533943 | orchestrator | Wednesday 01 April 2026 00:54:00 +0000 (0:00:00.362) 0:05:07.736 ******* 2026-04-01 00:59:23.533955 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533961 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.533966 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.533972 | orchestrator | 2026-04-01 00:59:23.533978 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-01 00:59:23.533984 | orchestrator | Wednesday 01 April 2026 00:54:01 +0000 (0:00:00.624) 0:05:08.360 ******* 2026-04-01 00:59:23.533991 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.533997 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.534003 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.534009 | orchestrator | 2026-04-01 00:59:23.534056 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-01 00:59:23.534063 | orchestrator | Wednesday 01 April 2026 00:54:01 +0000 (0:00:00.325) 0:05:08.686 ******* 2026-04-01 00:59:23.534069 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.534075 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.534081 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.534087 | orchestrator | 2026-04-01 00:59:23.534094 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-01 00:59:23.534100 | orchestrator | Wednesday 01 April 2026 00:54:01 +0000 (0:00:00.501) 0:05:09.187 ******* 2026-04-01 00:59:23.534107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-01 00:59:23.534113 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 00:59:23.534121 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-01 00:59:23.534128 | orchestrator | 2026-04-01 00:59:23.534135 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-01 00:59:23.534139 | orchestrator | Wednesday 01 April 2026 00:54:02 +0000 (0:00:00.885) 0:05:10.073 ******* 2026-04-01 00:59:23.534145 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.534153 | orchestrator | 2026-04-01 00:59:23.534162 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-01 00:59:23.534167 | orchestrator | Wednesday 01 April 2026 00:54:03 +0000 (0:00:00.749) 0:05:10.823 ******* 2026-04-01 00:59:23.534173 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.534178 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.534185 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.534191 | orchestrator | 2026-04-01 00:59:23.534196 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-01 00:59:23.534203 | orchestrator | Wednesday 01 April 2026 00:54:04 +0000 (0:00:00.841) 0:05:11.664 ******* 2026-04-01 00:59:23.534208 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.534214 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.534219 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.534225 | orchestrator | 2026-04-01 00:59:23.534231 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-01 00:59:23.534236 | orchestrator | Wednesday 01 April 2026 00:54:04 +0000 (0:00:00.321) 0:05:11.986 ******* 2026-04-01 00:59:23.534243 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:59:23.534250 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:59:23.534254 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-01 00:59:23.534258 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-01 00:59:23.534263 | orchestrator | 2026-04-01 00:59:23.534266 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-01 00:59:23.534270 | orchestrator | Wednesday 01 April 2026 00:54:14 +0000 (0:00:10.282) 0:05:22.268 ******* 2026-04-01 00:59:23.534274 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.534278 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.534282 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.534286 | orchestrator | 2026-04-01 00:59:23.534292 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-01 00:59:23.534306 | orchestrator | Wednesday 01 April 2026 00:54:15 +0000 (0:00:00.567) 0:05:22.835 ******* 2026-04-01 00:59:23.534315 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 00:59:23.534322 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 00:59:23.534328 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-01 00:59:23.534339 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:59:23.534345 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 00:59:23.534390 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-01 00:59:23.534398 | orchestrator | 2026-04-01 00:59:23.534403 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-01 00:59:23.534409 | orchestrator | Wednesday 01 April 2026 00:54:17 +0000 (0:00:02.100) 0:05:24.936 ******* 2026-04-01 00:59:23.534415 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-01 00:59:23.534420 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-01 00:59:23.534425 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-01 
00:59:23.534430 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-01 00:59:23.534436 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-01 00:59:23.534441 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-01 00:59:23.534447 | orchestrator | 2026-04-01 00:59:23.534452 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-01 00:59:23.534457 | orchestrator | Wednesday 01 April 2026 00:54:19 +0000 (0:00:01.377) 0:05:26.313 ******* 2026-04-01 00:59:23.534462 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.534467 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.534473 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.534479 | orchestrator | 2026-04-01 00:59:23.534484 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-01 00:59:23.534489 | orchestrator | Wednesday 01 April 2026 00:54:19 +0000 (0:00:00.728) 0:05:27.042 ******* 2026-04-01 00:59:23.534495 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.534500 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.534505 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.534511 | orchestrator | 2026-04-01 00:59:23.534516 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-01 00:59:23.534522 | orchestrator | Wednesday 01 April 2026 00:54:20 +0000 (0:00:00.440) 0:05:27.483 ******* 2026-04-01 00:59:23.534527 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.534532 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.534538 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.534543 | orchestrator | 2026-04-01 00:59:23.534548 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-01 00:59:23.534553 | orchestrator | Wednesday 01 April 2026 00:54:20 +0000 (0:00:00.270) 
0:05:27.754 ******* 2026-04-01 00:59:23.534560 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.534566 | orchestrator | 2026-04-01 00:59:23.534572 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-01 00:59:23.534578 | orchestrator | Wednesday 01 April 2026 00:54:20 +0000 (0:00:00.438) 0:05:28.192 ******* 2026-04-01 00:59:23.534584 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.534589 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.534594 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.534599 | orchestrator | 2026-04-01 00:59:23.534606 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-01 00:59:23.534612 | orchestrator | Wednesday 01 April 2026 00:54:21 +0000 (0:00:00.304) 0:05:28.497 ******* 2026-04-01 00:59:23.534617 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.534622 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.534627 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.534640 | orchestrator | 2026-04-01 00:59:23.534647 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-01 00:59:23.534653 | orchestrator | Wednesday 01 April 2026 00:54:21 +0000 (0:00:00.421) 0:05:28.919 ******* 2026-04-01 00:59:23.534658 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.534664 | orchestrator | 2026-04-01 00:59:23.534669 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-01 00:59:23.534675 | orchestrator | Wednesday 01 April 2026 00:54:22 +0000 (0:00:00.451) 0:05:29.370 ******* 2026-04-01 00:59:23.534681 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.534686 | orchestrator | 
changed: [testbed-node-1]
2026-04-01 00:59:23.534692 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.534697 | orchestrator |
2026-04-01 00:59:23.534703 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-01 00:59:23.534710 | orchestrator | Wednesday 01 April 2026 00:54:23 +0000 (0:00:01.283) 0:05:30.654 *******
2026-04-01 00:59:23.534716 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.534722 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.534729 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.534753 | orchestrator |
2026-04-01 00:59:23.534759 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-01 00:59:23.534765 | orchestrator | Wednesday 01 April 2026 00:54:24 +0000 (0:00:01.389) 0:05:32.043 *******
2026-04-01 00:59:23.534771 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.534777 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.534783 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.534789 | orchestrator |
2026-04-01 00:59:23.534795 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-01 00:59:23.534800 | orchestrator | Wednesday 01 April 2026 00:54:26 +0000 (0:00:01.773) 0:05:33.817 *******
2026-04-01 00:59:23.534806 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.534811 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.534818 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.534824 | orchestrator |
2026-04-01 00:59:23.534830 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-01 00:59:23.534836 | orchestrator | Wednesday 01 April 2026 00:54:28 +0000 (0:00:00.377) 0:05:35.561 *******
2026-04-01 00:59:23.534842 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.534848 | orchestrator | skipping: [testbed-node-1]
2026-04-01 00:59:23.534854 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-01 00:59:23.534861 | orchestrator |
2026-04-01 00:59:23.534873 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-01 00:59:23.534879 | orchestrator | Wednesday 01 April 2026 00:54:28 +0000 (0:00:00.377) 0:05:35.939 *******
2026-04-01 00:59:23.534916 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-01 00:59:23.534923 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-01 00:59:23.534928 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-01 00:59:23.534935 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-01 00:59:23.534941 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.534947 | orchestrator |
2026-04-01 00:59:23.534952 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-01 00:59:23.534959 | orchestrator | Wednesday 01 April 2026 00:54:52 +0000 (0:00:24.236) 0:06:00.175 *******
2026-04-01 00:59:23.534965 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.534971 | orchestrator |
2026-04-01 00:59:23.534977 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-01 00:59:23.534983 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:01.594) 0:06:01.770 *******
2026-04-01 00:59:23.534996 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.535000 | orchestrator |
2026-04-01 00:59:23.535004 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-01 00:59:23.535007 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.276) 0:06:02.046 *******
2026-04-01 00:59:23.535011 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.535015 | orchestrator |
2026-04-01 00:59:23.535019 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-01 00:59:23.535023 | orchestrator | Wednesday 01 April 2026 00:54:54 +0000 (0:00:00.125) 0:06:02.172 *******
2026-04-01 00:59:23.535026 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-01 00:59:23.535030 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-01 00:59:23.535034 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-01 00:59:23.535038 | orchestrator |
2026-04-01 00:59:23.535042 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-01 00:59:23.535046 | orchestrator | Wednesday 01 April 2026 00:55:01 +0000 (0:00:06.205) 0:06:08.377 *******
2026-04-01 00:59:23.535049 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-01 00:59:23.535053 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-01 00:59:23.535057 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-01 00:59:23.535061 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-01 00:59:23.535065 | orchestrator |
2026-04-01 00:59:23.535069 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:59:23.535073 | orchestrator | Wednesday 01 April 2026 00:55:05 +0000 (0:00:04.594) 0:06:12.972 *******
2026-04-01 00:59:23.535077 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.535082 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.535088 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.535094 | orchestrator |
2026-04-01 00:59:23.535099 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-01 00:59:23.535107 | orchestrator | Wednesday 01 April 2026 00:55:06 +0000 (0:00:00.916) 0:06:13.888 *******
2026-04-01 00:59:23.535115 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-01 00:59:23.535120 | orchestrator |
2026-04-01 00:59:23.535126 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-01 00:59:23.535131 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:00.439) 0:06:14.328 *******
2026-04-01 00:59:23.535137 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.535143 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.535148 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.535154 | orchestrator |
2026-04-01 00:59:23.535159 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-01 00:59:23.535165 | orchestrator | Wednesday 01 April 2026 00:55:07 +0000 (0:00:00.248) 0:06:14.577 *******
2026-04-01 00:59:23.535171 | orchestrator | changed: [testbed-node-0]
2026-04-01 00:59:23.535177 | orchestrator | changed: [testbed-node-1]
2026-04-01 00:59:23.535183 | orchestrator | changed: [testbed-node-2]
2026-04-01 00:59:23.535189 | orchestrator |
2026-04-01 00:59:23.535195 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-01 00:59:23.535200 | orchestrator | Wednesday 01 April 2026 00:55:08 +0000 (0:00:01.338) 0:06:15.916 *******
2026-04-01 00:59:23.535205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-01 00:59:23.535211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-01 00:59:23.535216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-01 00:59:23.535222 | orchestrator | skipping: [testbed-node-0]
2026-04-01 00:59:23.535228 | orchestrator |
2026-04-01 00:59:23.535234 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-01 00:59:23.535246 | orchestrator | Wednesday 01 April 2026 00:55:09 +0000 (0:00:00.550) 0:06:16.466 *******
2026-04-01 00:59:23.535251 | orchestrator | ok: [testbed-node-0]
2026-04-01 00:59:23.535257 | orchestrator | ok: [testbed-node-1]
2026-04-01 00:59:23.535264 | orchestrator | ok: [testbed-node-2]
2026-04-01 00:59:23.535269 | orchestrator |
2026-04-01 00:59:23.535275 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-01 00:59:23.535282 | orchestrator |
2026-04-01 00:59:23.535287 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:59:23.535299 | orchestrator | Wednesday 01 April 2026 00:55:09 +0000 (0:00:00.496) 0:06:16.963 *******
2026-04-01 00:59:23.535304 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.535311 | orchestrator |
2026-04-01 00:59:23.535350 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:59:23.535358 | orchestrator | Wednesday 01 April 2026 00:55:10 +0000 (0:00:00.590) 0:06:17.553 *******
2026-04-01 00:59:23.535364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.535370 | orchestrator |
2026-04-01 00:59:23.535377 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:59:23.535383 | orchestrator | Wednesday 01 April 2026 00:55:10 +0000 (0:00:00.447) 0:06:18.001 *******
2026-04-01 00:59:23.535389 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535396 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535401 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535406 | orchestrator |
2026-04-01 00:59:23.535410 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:59:23.535415 | orchestrator | Wednesday 01 April 2026 00:55:10 +0000 (0:00:00.260) 0:06:18.261 *******
2026-04-01 00:59:23.535419 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535424 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535428 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535433 | orchestrator |
2026-04-01 00:59:23.535437 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:59:23.535442 | orchestrator | Wednesday 01 April 2026 00:55:11 +0000 (0:00:00.787) 0:06:19.049 *******
2026-04-01 00:59:23.535446 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535450 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535455 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535459 | orchestrator |
2026-04-01 00:59:23.535464 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:59:23.535468 | orchestrator | Wednesday 01 April 2026 00:55:12 +0000 (0:00:00.687) 0:06:19.737 *******
2026-04-01 00:59:23.535473 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535477 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535481 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535486 | orchestrator |
2026-04-01 00:59:23.535490 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:59:23.535495 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:00.629) 0:06:20.366 *******
2026-04-01 00:59:23.535499 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535504 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535508 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535513 | orchestrator |
2026-04-01 00:59:23.535517 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:59:23.535522 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:00.292) 0:06:20.659 *******
2026-04-01 00:59:23.535526 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535531 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535535 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535541 | orchestrator |
2026-04-01 00:59:23.535547 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:59:23.535560 | orchestrator | Wednesday 01 April 2026 00:55:13 +0000 (0:00:00.416) 0:06:21.075 *******
2026-04-01 00:59:23.535564 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535569 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535573 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535578 | orchestrator |
2026-04-01 00:59:23.535582 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:59:23.535586 | orchestrator | Wednesday 01 April 2026 00:55:14 +0000 (0:00:00.283) 0:06:21.359 *******
2026-04-01 00:59:23.535591 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535595 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535600 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535604 | orchestrator |
2026-04-01 00:59:23.535609 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:59:23.535613 | orchestrator | Wednesday 01 April 2026 00:55:14 +0000 (0:00:00.637) 0:06:21.996 *******
2026-04-01 00:59:23.535618 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535622 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535627 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535631 | orchestrator |
2026-04-01 00:59:23.535636 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:59:23.535640 | orchestrator | Wednesday 01 April 2026 00:55:15 +0000 (0:00:00.678) 0:06:22.674 *******
2026-04-01 00:59:23.535645 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535649 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535654 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535658 | orchestrator |
2026-04-01 00:59:23.535662 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:59:23.535666 | orchestrator | Wednesday 01 April 2026 00:55:15 +0000 (0:00:00.416) 0:06:23.091 *******
2026-04-01 00:59:23.535670 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535674 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535677 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535681 | orchestrator |
2026-04-01 00:59:23.535685 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:59:23.535689 | orchestrator | Wednesday 01 April 2026 00:55:16 +0000 (0:00:00.296) 0:06:23.388 *******
2026-04-01 00:59:23.535693 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535697 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535703 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535709 | orchestrator |
2026-04-01 00:59:23.535714 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:59:23.535719 | orchestrator | Wednesday 01 April 2026 00:55:16 +0000 (0:00:00.305) 0:06:23.694 *******
2026-04-01 00:59:23.535725 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535730 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535787 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535792 | orchestrator |
2026-04-01 00:59:23.535798 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:59:23.535803 | orchestrator | Wednesday 01 April 2026 00:55:16 +0000 (0:00:00.266) 0:06:23.960 *******
2026-04-01 00:59:23.535814 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535820 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535826 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535832 | orchestrator |
2026-04-01 00:59:23.535838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:59:23.535853 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.418) 0:06:24.379 *******
2026-04-01 00:59:23.535858 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535864 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535870 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535876 | orchestrator |
2026-04-01 00:59:23.535882 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:59:23.535888 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.255) 0:06:24.634 *******
2026-04-01 00:59:23.535899 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535905 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535911 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535917 | orchestrator |
2026-04-01 00:59:23.535923 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:59:23.535929 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.253) 0:06:24.888 *******
2026-04-01 00:59:23.535936 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.535941 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.535945 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.535948 | orchestrator |
2026-04-01 00:59:23.535952 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:59:23.535956 | orchestrator | Wednesday 01 April 2026 00:55:17 +0000 (0:00:00.247) 0:06:25.136 *******
2026-04-01 00:59:23.535960 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535964 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535968 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.535974 | orchestrator |
2026-04-01 00:59:23.535981 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:59:23.535985 | orchestrator | Wednesday 01 April 2026 00:55:18 +0000 (0:00:00.436) 0:06:25.573 *******
2026-04-01 00:59:23.535989 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.535993 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.535997 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536000 | orchestrator |
2026-04-01 00:59:23.536006 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-01 00:59:23.536012 | orchestrator | Wednesday 01 April 2026 00:55:18 +0000 (0:00:00.455) 0:06:26.028 *******
2026-04-01 00:59:23.536018 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.536024 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.536034 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536044 | orchestrator |
2026-04-01 00:59:23.536051 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-01 00:59:23.536056 | orchestrator | Wednesday 01 April 2026 00:55:18 +0000 (0:00:00.273) 0:06:26.302 *******
2026-04-01 00:59:23.536062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-01 00:59:23.536068 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-01 00:59:23.536074 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-01 00:59:23.536080 | orchestrator |
2026-04-01 00:59:23.536085 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-01 00:59:23.536090 | orchestrator | Wednesday 01 April 2026 00:55:19 +0000 (0:00:00.723) 0:06:27.025 *******
2026-04-01 00:59:23.536096 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.536102 | orchestrator |
2026-04-01 00:59:23.536107 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-01 00:59:23.536112 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.628) 0:06:27.654 *******
2026-04-01 00:59:23.536119 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536124 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536130 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536135 | orchestrator |
2026-04-01 00:59:23.536141 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-01 00:59:23.536146 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.252) 0:06:27.906 *******
2026-04-01 00:59:23.536152 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536157 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536162 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536168 | orchestrator |
2026-04-01 00:59:23.536173 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-01 00:59:23.536179 | orchestrator | Wednesday 01 April 2026 00:55:20 +0000 (0:00:00.233) 0:06:28.140 *******
2026-04-01 00:59:23.536192 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.536197 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.536203 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536208 | orchestrator |
2026-04-01 00:59:23.536213 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-01 00:59:23.536219 | orchestrator | Wednesday 01 April 2026 00:55:21 +0000 (0:00:00.785) 0:06:28.926 *******
2026-04-01 00:59:23.536225 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.536231 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.536237 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536241 | orchestrator |
2026-04-01 00:59:23.536244 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-01 00:59:23.536248 | orchestrator | Wednesday 01 April 2026 00:55:21 +0000 (0:00:00.347) 0:06:29.274 *******
2026-04-01 00:59:23.536252 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:59:23.536257 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:59:23.536260 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:59:23.536264 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:59:23.536272 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:59:23.536276 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:59:23.536290 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-01 00:59:23.536294 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:59:23.536298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-01 00:59:23.536302 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:59:23.536306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:59:23.536309 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:59:23.536313 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-01 00:59:23.536317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-01 00:59:23.536321 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-01 00:59:23.536324 | orchestrator |
2026-04-01 00:59:23.536328 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-01 00:59:23.536332 | orchestrator | Wednesday 01 April 2026 00:55:26 +0000 (0:00:04.978) 0:06:34.252 *******
2026-04-01 00:59:23.536336 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536340 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536343 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536347 | orchestrator |
2026-04-01 00:59:23.536351 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-01 00:59:23.536355 | orchestrator | Wednesday 01 April 2026 00:55:27 +0000 (0:00:00.260) 0:06:34.512 *******
2026-04-01 00:59:23.536359 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.536362 | orchestrator |
2026-04-01 00:59:23.536366 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-01 00:59:23.536370 | orchestrator | Wednesday 01 April 2026 00:55:27 +0000 (0:00:00.633) 0:06:35.146 *******
2026-04-01 00:59:23.536374 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:59:23.536378 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:59:23.536382 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-01 00:59:23.536394 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-01 00:59:23.536398 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-01 00:59:23.536402 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-01 00:59:23.536406 | orchestrator |
2026-04-01 00:59:23.536410 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-01 00:59:23.536414 | orchestrator | Wednesday 01 April 2026 00:55:28 +0000 (0:00:01.075) 0:06:36.221 *******
2026-04-01 00:59:23.536418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.536422 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.536425 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.536429 | orchestrator |
2026-04-01 00:59:23.536433 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:59:23.536437 | orchestrator | Wednesday 01 April 2026 00:55:30 +0000 (0:00:01.723) 0:06:37.945 *******
2026-04-01 00:59:23.536441 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.536445 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.536449 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.536452 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.536456 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.536460 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.536464 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.536468 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.536471 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.536475 | orchestrator |
2026-04-01 00:59:23.536479 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-01 00:59:23.536483 | orchestrator | Wednesday 01 April 2026 00:55:31 +0000 (0:00:01.353) 0:06:39.298 *******
2026-04-01 00:59:23.536486 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.536490 | orchestrator |
2026-04-01 00:59:23.536494 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-01 00:59:23.536498 | orchestrator | Wednesday 01 April 2026 00:55:34 +0000 (0:00:02.670) 0:06:41.969 *******
2026-04-01 00:59:23.536502 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.536505 | orchestrator |
2026-04-01 00:59:23.536509 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-01 00:59:23.536513 | orchestrator | Wednesday 01 April 2026 00:55:35 +0000 (0:00:00.464) 0:06:42.434 *******
2026-04-01 00:59:23.536517 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00bcfd13-59f0-54da-b43f-34edf6af7c7d', 'data_vg': 'ceph-00bcfd13-59f0-54da-b43f-34edf6af7c7d'})
2026-04-01 00:59:23.536524 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-070a6fcd-e232-5822-bdac-2856eb469583', 'data_vg': 'ceph-070a6fcd-e232-5822-bdac-2856eb469583'})
2026-04-01 00:59:23.536530 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f', 'data_vg': 'ceph-c7c10550-c1bc-5fe3-90d5-7d7a9167f51f'})
2026-04-01 00:59:23.536537 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f8eedd5-4e35-5081-a67e-565e77fef082', 'data_vg': 'ceph-2f8eedd5-4e35-5081-a67e-565e77fef082'})
2026-04-01 00:59:23.536541 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-24dba708-820d-5543-af14-6cbe38251993', 'data_vg': 'ceph-24dba708-820d-5543-af14-6cbe38251993'})
2026-04-01 00:59:23.536545 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d3162267-511d-5f73-a1c4-60a47e452e5f', 'data_vg': 'ceph-d3162267-511d-5f73-a1c4-60a47e452e5f'})
2026-04-01 00:59:23.536549 | orchestrator |
2026-04-01 00:59:23.536553 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-01 00:59:23.536564 | orchestrator | Wednesday 01 April 2026 00:56:14 +0000 (0:00:39.716) 0:07:22.151 *******
2026-04-01 00:59:23.536570 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536579 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536588 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536593 | orchestrator |
2026-04-01 00:59:23.536598 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-01 00:59:23.536604 | orchestrator | Wednesday 01 April 2026 00:56:15 +0000 (0:00:00.560) 0:07:22.711 *******
2026-04-01 00:59:23.536609 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.536616 | orchestrator |
2026-04-01 00:59:23.536622 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-01 00:59:23.536627 | orchestrator | Wednesday 01 April 2026 00:56:15 +0000 (0:00:00.511) 0:07:23.223 *******
2026-04-01 00:59:23.536633 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.536640 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.536645 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536651 | orchestrator |
2026-04-01 00:59:23.536656 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-01 00:59:23.536662 | orchestrator | Wednesday 01 April 2026 00:56:16 +0000 (0:00:00.673) 0:07:23.896 *******
2026-04-01 00:59:23.536667 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.536673 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.536678 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.536684 | orchestrator |
2026-04-01 00:59:23.536690 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-01 00:59:23.536695 | orchestrator | Wednesday 01 April 2026 00:56:19 +0000 (0:00:02.582) 0:07:26.478 *******
2026-04-01 00:59:23.536701 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.536707 | orchestrator |
2026-04-01 00:59:23.536714 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-01 00:59:23.536718 | orchestrator | Wednesday 01 April 2026 00:56:19 +0000 (0:00:00.503) 0:07:26.982 *******
2026-04-01 00:59:23.536722 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.536726 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.536730 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.536759 | orchestrator |
2026-04-01 00:59:23.536763 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-01 00:59:23.536767 | orchestrator | Wednesday 01 April 2026 00:56:20 +0000 (0:00:01.286) 0:07:28.268 *******
2026-04-01 00:59:23.536771 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.536775 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.536779 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.536783 | orchestrator |
2026-04-01 00:59:23.536786 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-01 00:59:23.536790 | orchestrator | Wednesday 01 April 2026 00:56:22 +0000 (0:00:01.426) 0:07:29.695 *******
2026-04-01 00:59:23.536794 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.536798 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.536802 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.536805 | orchestrator |
2026-04-01 00:59:23.536809 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-01 00:59:23.536813 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:02.021) 0:07:31.716 *******
2026-04-01 00:59:23.536817 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536821 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536825 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536829 | orchestrator |
2026-04-01 00:59:23.536832 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-01 00:59:23.536836 | orchestrator | Wednesday 01 April 2026 00:56:24 +0000 (0:00:00.310) 0:07:32.026 *******
2026-04-01 00:59:23.536840 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536844 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536853 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.536856 | orchestrator |
2026-04-01 00:59:23.536860 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-01 00:59:23.536864 | orchestrator | Wednesday 01 April 2026 00:56:25 +0000 (0:00:00.321) 0:07:32.348 *******
2026-04-01 00:59:23.536868 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-01 00:59:23.536872 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-04-01 00:59:23.536876 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-01 00:59:23.536880 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-04-01 00:59:23.536883 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-04-01 00:59:23.536887 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-01 00:59:23.536891 | orchestrator |
2026-04-01 00:59:23.536895 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-01 00:59:23.536899 | orchestrator | Wednesday 01 April 2026 00:56:26 +0000 (0:00:01.393) 0:07:33.742 *******
2026-04-01 00:59:23.536903 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-01 00:59:23.536907 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-01 00:59:23.536910 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-01 00:59:23.536918 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-01 00:59:23.536922 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-01 00:59:23.536926 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-01 00:59:23.536930 | orchestrator |
2026-04-01 00:59:23.536938 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-01 00:59:23.536942 | orchestrator | Wednesday 01 April 2026 00:56:28 +0000 (0:00:01.980) 0:07:35.722 *******
2026-04-01 00:59:23.536946 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-01 00:59:23.536950 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-01 00:59:23.536954 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-01 00:59:23.536957 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-01 00:59:23.536961 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-01 00:59:23.536965 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-01 00:59:23.536969 | orchestrator |
2026-04-01 00:59:23.536973 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-01 00:59:23.536977 | orchestrator | Wednesday 01 April 2026 00:56:32 +0000 (0:00:03.996) 0:07:39.718 *******
2026-04-01 00:59:23.536980 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.536984 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.536988 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.536992 | orchestrator |
2026-04-01 00:59:23.536999 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-01 00:59:23.537005 | orchestrator | Wednesday 01 April 2026 00:56:35 +0000 (0:00:02.815) 0:07:42.534 *******
2026-04-01 00:59:23.537011 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537019 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.537026 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-01 00:59:23.537034 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.537039 | orchestrator |
2026-04-01 00:59:23.537045 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-01 00:59:23.537051 | orchestrator | Wednesday 01 April 2026 00:56:47 +0000 (0:00:12.716) 0:07:55.251 *******
2026-04-01 00:59:23.537057 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537063 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.537068 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.537074 | orchestrator |
2026-04-01 00:59:23.537079 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:59:23.537084 | orchestrator | Wednesday 01 April 2026 00:56:48 +0000 (0:00:00.726) 0:07:55.977 *******
2026-04-01 00:59:23.537090 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537101 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.537107 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.537113 | orchestrator |
2026-04-01 00:59:23.537118 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-01 00:59:23.537124 | orchestrator | Wednesday 01 April 2026 00:56:49 +0000 (0:00:00.437) 0:07:56.414 *******
2026-04-01 00:59:23.537129 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.537135 | orchestrator |
2026-04-01 00:59:23.537141 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-01 00:59:23.537147 | orchestrator | Wednesday 01 April 2026 00:56:49 +0000 (0:00:00.465) 0:07:56.880 *******
2026-04-01 00:59:23.537153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.537160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.537166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.537171 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537178 | orchestrator |
2026-04-01 00:59:23.537183 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-01 00:59:23.537188 | orchestrator | Wednesday 01 April 2026 00:56:49 +0000 (0:00:00.367) 0:07:57.247 *******
2026-04-01 00:59:23.537192 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537196 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.537199 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.537203 | orchestrator |
2026-04-01 00:59:23.537207 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-01 00:59:23.537211 | orchestrator | Wednesday 01 April 2026 00:56:50 +0000 (0:00:00.332) 0:07:57.580 *******
2026-04-01 00:59:23.537215 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.537218 | orchestrator |
2026-04-01 00:59:23.537222 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-01 00:59:23.537226 | orchestrator | Wednesday 01 April
2026 00:56:50 +0000 (0:00:00.203) 0:07:57.784 ******* 2026-04-01 00:59:23.537230 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537234 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537238 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537242 | orchestrator | 2026-04-01 00:59:23.537245 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-01 00:59:23.537249 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.544) 0:07:58.328 ******* 2026-04-01 00:59:23.537253 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537257 | orchestrator | 2026-04-01 00:59:23.537261 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-01 00:59:23.537264 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.222) 0:07:58.551 ******* 2026-04-01 00:59:23.537269 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537275 | orchestrator | 2026-04-01 00:59:23.537281 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-01 00:59:23.537287 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.208) 0:07:58.759 ******* 2026-04-01 00:59:23.537292 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537301 | orchestrator | 2026-04-01 00:59:23.537307 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-01 00:59:23.537317 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.122) 0:07:58.882 ******* 2026-04-01 00:59:23.537323 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537328 | orchestrator | 2026-04-01 00:59:23.537340 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-01 00:59:23.537345 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.215) 0:07:59.098 ******* 2026-04-01 
00:59:23.537357 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537363 | orchestrator | 2026-04-01 00:59:23.537372 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-01 00:59:23.537384 | orchestrator | Wednesday 01 April 2026 00:56:51 +0000 (0:00:00.206) 0:07:59.304 ******* 2026-04-01 00:59:23.537389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:59:23.537395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:59:23.537401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:59:23.537407 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537412 | orchestrator | 2026-04-01 00:59:23.537418 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-01 00:59:23.537423 | orchestrator | Wednesday 01 April 2026 00:56:52 +0000 (0:00:00.398) 0:07:59.703 ******* 2026-04-01 00:59:23.537429 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537434 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537440 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537445 | orchestrator | 2026-04-01 00:59:23.537451 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-01 00:59:23.537457 | orchestrator | Wednesday 01 April 2026 00:56:52 +0000 (0:00:00.288) 0:07:59.991 ******* 2026-04-01 00:59:23.537463 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537469 | orchestrator | 2026-04-01 00:59:23.537474 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-01 00:59:23.537480 | orchestrator | Wednesday 01 April 2026 00:56:53 +0000 (0:00:00.665) 0:08:00.656 ******* 2026-04-01 00:59:23.537486 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537491 | orchestrator | 2026-04-01 00:59:23.537497 | 
orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-01 00:59:23.537502 | orchestrator | 2026-04-01 00:59:23.537508 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:59:23.537514 | orchestrator | Wednesday 01 April 2026 00:56:53 +0000 (0:00:00.568) 0:08:01.225 ******* 2026-04-01 00:59:23.537521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.537530 | orchestrator | 2026-04-01 00:59:23.537535 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:59:23.537541 | orchestrator | Wednesday 01 April 2026 00:56:54 +0000 (0:00:00.975) 0:08:02.200 ******* 2026-04-01 00:59:23.537548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.537554 | orchestrator | 2026-04-01 00:59:23.537559 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-01 00:59:23.537563 | orchestrator | Wednesday 01 April 2026 00:56:55 +0000 (0:00:00.959) 0:08:03.160 ******* 2026-04-01 00:59:23.537567 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537571 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537575 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537579 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.537583 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.537588 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.537595 | orchestrator | 2026-04-01 00:59:23.537600 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-01 00:59:23.537606 | 
orchestrator | Wednesday 01 April 2026 00:56:56 +0000 (0:00:00.976) 0:08:04.136 ******* 2026-04-01 00:59:23.537612 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.537618 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.537624 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.537631 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.537637 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.537643 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.537649 | orchestrator | 2026-04-01 00:59:23.537656 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-01 00:59:23.537662 | orchestrator | Wednesday 01 April 2026 00:56:57 +0000 (0:00:00.928) 0:08:05.065 ******* 2026-04-01 00:59:23.537673 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.537677 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.537681 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.537685 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.537689 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.537693 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.537697 | orchestrator | 2026-04-01 00:59:23.537701 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-01 00:59:23.537707 | orchestrator | Wednesday 01 April 2026 00:56:58 +0000 (0:00:00.736) 0:08:05.801 ******* 2026-04-01 00:59:23.537713 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.537718 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.537722 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.537726 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.537730 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.537750 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.537756 | orchestrator | 2026-04-01 00:59:23.537762 | orchestrator | TASK [ceph-handler : Check 
for a mgr container] ******************************** 2026-04-01 00:59:23.537768 | orchestrator | Wednesday 01 April 2026 00:56:59 +0000 (0:00:01.022) 0:08:06.824 ******* 2026-04-01 00:59:23.537774 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537780 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537786 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537792 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.537797 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.537805 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.537809 | orchestrator | 2026-04-01 00:59:23.537813 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-01 00:59:23.537817 | orchestrator | Wednesday 01 April 2026 00:57:00 +0000 (0:00:01.095) 0:08:07.919 ******* 2026-04-01 00:59:23.537821 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537829 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537833 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537839 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.537845 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.537855 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.537859 | orchestrator | 2026-04-01 00:59:23.537863 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-01 00:59:23.537867 | orchestrator | Wednesday 01 April 2026 00:57:01 +0000 (0:00:00.950) 0:08:08.869 ******* 2026-04-01 00:59:23.537871 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.537875 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.537878 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.537884 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.537890 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.537896 | orchestrator | skipping: 
[testbed-node-2] 2026-04-01 00:59:23.537904 | orchestrator | 2026-04-01 00:59:23.537912 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-01 00:59:23.537918 | orchestrator | Wednesday 01 April 2026 00:57:02 +0000 (0:00:00.569) 0:08:09.439 ******* 2026-04-01 00:59:23.537924 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.537930 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.537936 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.537942 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.537948 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.537954 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.537960 | orchestrator | 2026-04-01 00:59:23.537964 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-01 00:59:23.537968 | orchestrator | Wednesday 01 April 2026 00:57:03 +0000 (0:00:01.494) 0:08:10.933 ******* 2026-04-01 00:59:23.537972 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.537975 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.537979 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.537983 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.537992 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.537998 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.538003 | orchestrator | 2026-04-01 00:59:23.538009 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-01 00:59:23.538059 | orchestrator | Wednesday 01 April 2026 00:57:04 +0000 (0:00:01.038) 0:08:11.972 ******* 2026-04-01 00:59:23.538065 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.538071 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.538079 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.538084 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.538089 | orchestrator | skipping: 
[testbed-node-1] 2026-04-01 00:59:23.538095 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538102 | orchestrator | 2026-04-01 00:59:23.538107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-01 00:59:23.538112 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.645) 0:08:12.618 ******* 2026-04-01 00:59:23.538118 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.538124 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.538131 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.538136 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538142 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.538147 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.538153 | orchestrator | 2026-04-01 00:59:23.538159 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-01 00:59:23.538164 | orchestrator | Wednesday 01 April 2026 00:57:05 +0000 (0:00:00.518) 0:08:13.137 ******* 2026-04-01 00:59:23.538170 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.538175 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538183 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538189 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.538195 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.538200 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538209 | orchestrator | 2026-04-01 00:59:23.538218 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-01 00:59:23.538224 | orchestrator | Wednesday 01 April 2026 00:57:06 +0000 (0:00:00.792) 0:08:13.929 ******* 2026-04-01 00:59:23.538230 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.538236 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538241 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538247 | orchestrator 
| skipping: [testbed-node-0] 2026-04-01 00:59:23.538253 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.538259 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538265 | orchestrator | 2026-04-01 00:59:23.538271 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-01 00:59:23.538277 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.520) 0:08:14.449 ******* 2026-04-01 00:59:23.538283 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.538289 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538295 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538301 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.538307 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.538313 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538320 | orchestrator | 2026-04-01 00:59:23.538324 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-01 00:59:23.538328 | orchestrator | Wednesday 01 April 2026 00:57:07 +0000 (0:00:00.697) 0:08:15.147 ******* 2026-04-01 00:59:23.538332 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.538336 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.538340 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.538344 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.538348 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.538351 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538355 | orchestrator | 2026-04-01 00:59:23.538359 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-01 00:59:23.538370 | orchestrator | Wednesday 01 April 2026 00:57:08 +0000 (0:00:00.618) 0:08:15.765 ******* 2026-04-01 00:59:23.538374 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.538378 | orchestrator | skipping: 
[testbed-node-4] 2026-04-01 00:59:23.538383 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.538389 | orchestrator | skipping: [testbed-node-0] 2026-04-01 00:59:23.538395 | orchestrator | skipping: [testbed-node-1] 2026-04-01 00:59:23.538401 | orchestrator | skipping: [testbed-node-2] 2026-04-01 00:59:23.538407 | orchestrator | 2026-04-01 00:59:23.538413 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-01 00:59:23.538423 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.853) 0:08:16.618 ******* 2026-04-01 00:59:23.538428 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.538431 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.538437 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.538444 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538458 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.538464 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.538469 | orchestrator | 2026-04-01 00:59:23.538475 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-01 00:59:23.538481 | orchestrator | Wednesday 01 April 2026 00:57:09 +0000 (0:00:00.606) 0:08:17.225 ******* 2026-04-01 00:59:23.538486 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.538492 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538499 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538505 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538511 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.538518 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.538524 | orchestrator | 2026-04-01 00:59:23.538531 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-01 00:59:23.538538 | orchestrator | Wednesday 01 April 2026 00:57:10 +0000 (0:00:00.889) 0:08:18.115 ******* 2026-04-01 00:59:23.538543 | orchestrator 
| ok: [testbed-node-3] 2026-04-01 00:59:23.538547 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538553 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538559 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538565 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.538570 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.538576 | orchestrator | 2026-04-01 00:59:23.538582 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-01 00:59:23.538588 | orchestrator | Wednesday 01 April 2026 00:57:12 +0000 (0:00:01.258) 0:08:19.373 ******* 2026-04-01 00:59:23.538594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:59:23.538601 | orchestrator | 2026-04-01 00:59:23.538607 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-01 00:59:23.538614 | orchestrator | Wednesday 01 April 2026 00:57:16 +0000 (0:00:04.055) 0:08:23.428 ******* 2026-04-01 00:59:23.538620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-01 00:59:23.538626 | orchestrator | 2026-04-01 00:59:23.538633 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-01 00:59:23.538639 | orchestrator | Wednesday 01 April 2026 00:57:18 +0000 (0:00:01.910) 0:08:25.339 ******* 2026-04-01 00:59:23.538643 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.538646 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.538650 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538654 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.538660 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.538666 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.538672 | orchestrator | 2026-04-01 00:59:23.538678 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
2026-04-01 00:59:23.538684 | orchestrator | Wednesday 01 April 2026 00:57:19 +0000 (0:00:01.726) 0:08:27.065 ******* 2026-04-01 00:59:23.538691 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.538697 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.538708 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.538715 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.538721 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.538727 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.538780 | orchestrator | 2026-04-01 00:59:23.538788 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-01 00:59:23.538794 | orchestrator | Wednesday 01 April 2026 00:57:21 +0000 (0:00:01.323) 0:08:28.388 ******* 2026-04-01 00:59:23.538801 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.538810 | orchestrator | 2026-04-01 00:59:23.538816 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-01 00:59:23.538822 | orchestrator | Wednesday 01 April 2026 00:57:22 +0000 (0:00:01.174) 0:08:29.563 ******* 2026-04-01 00:59:23.538828 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.538835 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.538841 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.538847 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.538853 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.538859 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.538865 | orchestrator | 2026-04-01 00:59:23.538872 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-01 00:59:23.538878 | orchestrator | Wednesday 01 April 2026 00:57:24 +0000 (0:00:01.830) 
0:08:31.394 ******* 2026-04-01 00:59:23.538885 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.538891 | orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.538897 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.538904 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.538910 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.538916 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.538922 | orchestrator | 2026-04-01 00:59:23.538928 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-01 00:59:23.538934 | orchestrator | Wednesday 01 April 2026 00:57:27 +0000 (0:00:03.790) 0:08:35.185 ******* 2026-04-01 00:59:23.538941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 00:59:23.538947 | orchestrator | 2026-04-01 00:59:23.538954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-01 00:59:23.538960 | orchestrator | Wednesday 01 April 2026 00:57:29 +0000 (0:00:01.200) 0:08:36.386 ******* 2026-04-01 00:59:23.538966 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.538973 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.538979 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.538985 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.538991 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.538998 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.539004 | orchestrator | 2026-04-01 00:59:23.539010 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-01 00:59:23.539021 | orchestrator | Wednesday 01 April 2026 00:57:29 +0000 (0:00:00.600) 0:08:36.986 ******* 2026-04-01 00:59:23.539028 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.539034 | 
orchestrator | changed: [testbed-node-0] 2026-04-01 00:59:23.539040 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.539046 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.539058 | orchestrator | changed: [testbed-node-1] 2026-04-01 00:59:23.539065 | orchestrator | changed: [testbed-node-2] 2026-04-01 00:59:23.539071 | orchestrator | 2026-04-01 00:59:23.539077 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-01 00:59:23.539084 | orchestrator | Wednesday 01 April 2026 00:57:32 +0000 (0:00:02.992) 0:08:39.979 ******* 2026-04-01 00:59:23.539090 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.539096 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.539102 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.539118 | orchestrator | ok: [testbed-node-0] 2026-04-01 00:59:23.539125 | orchestrator | ok: [testbed-node-1] 2026-04-01 00:59:23.539131 | orchestrator | ok: [testbed-node-2] 2026-04-01 00:59:23.539137 | orchestrator | 2026-04-01 00:59:23.539143 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-01 00:59:23.539149 | orchestrator | 2026-04-01 00:59:23.539156 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-01 00:59:23.539162 | orchestrator | Wednesday 01 April 2026 00:57:33 +0000 (0:00:00.855) 0:08:40.834 ******* 2026-04-01 00:59:23.539168 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:59:23.539175 | orchestrator | 2026-04-01 00:59:23.539181 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-01 00:59:23.539187 | orchestrator | Wednesday 01 April 2026 00:57:34 +0000 (0:00:00.767) 0:08:41.602 ******* 2026-04-01 00:59:23.539193 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.539200 | orchestrator |
2026-04-01 00:59:23.539206 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:59:23.539212 | orchestrator | Wednesday 01 April 2026 00:57:34 +0000 (0:00:00.517) 0:08:42.120 *******
2026-04-01 00:59:23.539219 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539225 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539231 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539237 | orchestrator |
2026-04-01 00:59:23.539244 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:59:23.539250 | orchestrator | Wednesday 01 April 2026 00:57:35 +0000 (0:00:00.287) 0:08:42.407 *******
2026-04-01 00:59:23.539256 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539262 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539268 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539275 | orchestrator |
2026-04-01 00:59:23.539281 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:59:23.539287 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:01.037) 0:08:43.445 *******
2026-04-01 00:59:23.539294 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539300 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539306 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539313 | orchestrator |
2026-04-01 00:59:23.539319 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:59:23.539325 | orchestrator | Wednesday 01 April 2026 00:57:36 +0000 (0:00:00.702) 0:08:44.147 *******
2026-04-01 00:59:23.539331 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539338 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539344 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539350 | orchestrator |
2026-04-01 00:59:23.539356 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:59:23.539363 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.707) 0:08:44.854 *******
2026-04-01 00:59:23.539367 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539371 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539375 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539379 | orchestrator |
2026-04-01 00:59:23.539383 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:59:23.539386 | orchestrator | Wednesday 01 April 2026 00:57:37 +0000 (0:00:00.251) 0:08:45.106 *******
2026-04-01 00:59:23.539390 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539394 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539398 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539402 | orchestrator |
2026-04-01 00:59:23.539406 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:59:23.539411 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.444) 0:08:45.551 *******
2026-04-01 00:59:23.539422 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539426 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539430 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539434 | orchestrator |
2026-04-01 00:59:23.539438 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:59:23.539442 | orchestrator | Wednesday 01 April 2026 00:57:38 +0000 (0:00:00.244) 0:08:45.795 *******
2026-04-01 00:59:23.539446 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539450 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539453 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539457 | orchestrator |
2026-04-01 00:59:23.539461 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:59:23.539465 | orchestrator | Wednesday 01 April 2026 00:57:39 +0000 (0:00:00.772) 0:08:46.567 *******
2026-04-01 00:59:23.539469 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539473 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539477 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539481 | orchestrator |
2026-04-01 00:59:23.539485 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:59:23.539488 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.849) 0:08:47.417 *******
2026-04-01 00:59:23.539492 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539496 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539500 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539504 | orchestrator |
2026-04-01 00:59:23.539508 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:59:23.539514 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.441) 0:08:47.858 *******
2026-04-01 00:59:23.539518 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539522 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539526 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539530 | orchestrator |
2026-04-01 00:59:23.539537 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:59:23.539541 | orchestrator | Wednesday 01 April 2026 00:57:40 +0000 (0:00:00.263) 0:08:48.121 *******
2026-04-01 00:59:23.539545 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539549 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539554 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539560 | orchestrator |
2026-04-01 00:59:23.539566 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:59:23.539572 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.275) 0:08:48.397 *******
2026-04-01 00:59:23.539578 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539584 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539590 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539597 | orchestrator |
2026-04-01 00:59:23.539603 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:59:23.539610 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.292) 0:08:48.690 *******
2026-04-01 00:59:23.539616 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539622 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539629 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539633 | orchestrator |
2026-04-01 00:59:23.539637 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:59:23.539641 | orchestrator | Wednesday 01 April 2026 00:57:41 +0000 (0:00:00.463) 0:08:49.153 *******
2026-04-01 00:59:23.539645 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539649 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539653 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539657 | orchestrator |
2026-04-01 00:59:23.539661 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:59:23.539665 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.267) 0:08:49.420 *******
2026-04-01 00:59:23.539669 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539672 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539680 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539684 | orchestrator |
2026-04-01 00:59:23.539688 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:59:23.539692 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.261) 0:08:49.681 *******
2026-04-01 00:59:23.539695 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539699 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539703 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539707 | orchestrator |
2026-04-01 00:59:23.539711 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:59:23.539715 | orchestrator | Wednesday 01 April 2026 00:57:42 +0000 (0:00:00.259) 0:08:49.941 *******
2026-04-01 00:59:23.539718 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539722 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539726 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539730 | orchestrator |
2026-04-01 00:59:23.539750 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:59:23.539754 | orchestrator | Wednesday 01 April 2026 00:57:43 +0000 (0:00:00.489) 0:08:50.431 *******
2026-04-01 00:59:23.539758 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.539762 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.539766 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.539770 | orchestrator |
2026-04-01 00:59:23.539774 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-01 00:59:23.539778 | orchestrator | Wednesday 01 April 2026 00:57:43 +0000 (0:00:00.522) 0:08:50.953 *******
2026-04-01 00:59:23.539781 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.539785 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.539789 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-01 00:59:23.539794 | orchestrator |
2026-04-01 00:59:23.539798 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-01 00:59:23.539802 | orchestrator | Wednesday 01 April 2026 00:57:44 +0000 (0:00:00.644) 0:08:51.598 *******
2026-04-01 00:59:23.539807 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.539814 | orchestrator |
2026-04-01 00:59:23.539820 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-01 00:59:23.539826 | orchestrator | Wednesday 01 April 2026 00:57:46 +0000 (0:00:01.932) 0:08:53.530 *******
2026-04-01 00:59:23.539834 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-01 00:59:23.539841 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.539845 | orchestrator |
2026-04-01 00:59:23.539849 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-01 00:59:23.539852 | orchestrator | Wednesday 01 April 2026 00:57:46 +0000 (0:00:00.209) 0:08:53.740 *******
2026-04-01 00:59:23.539858 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-01 00:59:23.539867 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-01 00:59:23.539872 | orchestrator |
2026-04-01 00:59:23.539876 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-01 00:59:23.539882 | orchestrator | Wednesday 01 April 2026 00:57:53 +0000 (0:00:07.086) 0:09:00.826 *******
2026-04-01 00:59:23.539886 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-01 00:59:23.539890 | orchestrator |
2026-04-01 00:59:23.539898 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-01 00:59:23.539906 | orchestrator | Wednesday 01 April 2026 00:57:56 +0000 (0:00:03.303) 0:09:04.130 *******
2026-04-01 00:59:23.539911 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.539915 | orchestrator |
2026-04-01 00:59:23.539918 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-01 00:59:23.539922 | orchestrator | Wednesday 01 April 2026 00:57:57 +0000 (0:00:00.454) 0:09:04.585 *******
2026-04-01 00:59:23.539926 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:59:23.539930 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:59:23.539934 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-01 00:59:23.539938 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-01 00:59:23.539942 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-01 00:59:23.539946 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-01 00:59:23.539950 | orchestrator |
2026-04-01 00:59:23.539954 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-01 00:59:23.539957 | orchestrator | Wednesday 01 April 2026 00:57:58 +0000 (0:00:01.151) 0:09:05.737 *******
2026-04-01 00:59:23.539961 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.539966 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.539969 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.539973 | orchestrator |
2026-04-01 00:59:23.539977 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:59:23.539981 | orchestrator | Wednesday 01 April 2026 00:58:00 +0000 (0:00:02.026) 0:09:07.764 *******
2026-04-01 00:59:23.539985 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.539989 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.539993 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.539997 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.540001 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.540005 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540008 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.540012 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.540016 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540020 | orchestrator |
2026-04-01 00:59:23.540024 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-01 00:59:23.540028 | orchestrator | Wednesday 01 April 2026 00:58:01 +0000 (0:00:01.251) 0:09:09.016 *******
2026-04-01 00:59:23.540032 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540036 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540039 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540043 | orchestrator |
2026-04-01 00:59:23.540048 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-01 00:59:23.540055 | orchestrator | Wednesday 01 April 2026 00:58:04 +0000 (0:00:02.713) 0:09:11.729 *******
2026-04-01 00:59:23.540060 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540065 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.540070 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.540076 | orchestrator |
2026-04-01 00:59:23.540082 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-01 00:59:23.540089 | orchestrator | Wednesday 01 April 2026 00:58:04 +0000 (0:00:00.553) 0:09:12.282 *******
2026-04-01 00:59:23.540095 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.540099 | orchestrator |
2026-04-01 00:59:23.540103 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-01 00:59:23.540112 | orchestrator | Wednesday 01 April 2026 00:58:05 +0000 (0:00:00.546) 0:09:12.829 *******
2026-04-01 00:59:23.540116 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.540179 | orchestrator |
2026-04-01 00:59:23.540184 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-01 00:59:23.540188 | orchestrator | Wednesday 01 April 2026 00:58:06 +0000 (0:00:00.767) 0:09:13.597 *******
2026-04-01 00:59:23.540192 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540196 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540200 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540204 | orchestrator |
2026-04-01 00:59:23.540208 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-01 00:59:23.540212 | orchestrator | Wednesday 01 April 2026 00:58:07 +0000 (0:00:01.291) 0:09:14.888 *******
2026-04-01 00:59:23.540215 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540219 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540226 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540232 | orchestrator |
2026-04-01 00:59:23.540238 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-01 00:59:23.540244 | orchestrator | Wednesday 01 April 2026 00:58:08 +0000 (0:00:01.194) 0:09:16.082 *******
2026-04-01 00:59:23.540250 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540256 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540262 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540268 | orchestrator |
2026-04-01 00:59:23.540274 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-01 00:59:23.540279 | orchestrator | Wednesday 01 April 2026 00:58:10 +0000 (0:00:01.928) 0:09:18.011 *******
2026-04-01 00:59:23.540289 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540296 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540302 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540309 | orchestrator |
2026-04-01 00:59:23.540323 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-01 00:59:23.540330 | orchestrator | Wednesday 01 April 2026 00:58:12 +0000 (0:00:02.225) 0:09:20.237 *******
2026-04-01 00:59:23.540336 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540342 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540348 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540355 | orchestrator |
2026-04-01 00:59:23.540361 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-01 00:59:23.540368 | orchestrator | Wednesday 01 April 2026 00:58:14 +0000 (0:00:01.133) 0:09:21.370 *******
2026-04-01 00:59:23.540374 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540381 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540388 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540392 | orchestrator |
2026-04-01 00:59:23.540396 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-01 00:59:23.540399 | orchestrator | Wednesday 01 April 2026 00:58:14 +0000 (0:00:00.832) 0:09:22.203 *******
2026-04-01 00:59:23.540404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.540407 | orchestrator |
2026-04-01 00:59:23.540411 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-01 00:59:23.540415 | orchestrator | Wednesday 01 April 2026 00:58:15 +0000 (0:00:00.457) 0:09:22.661 *******
2026-04-01 00:59:23.540421 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540428 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540434 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540440 | orchestrator |
2026-04-01 00:59:23.540446 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-01 00:59:23.540452 | orchestrator | Wednesday 01 April 2026 00:58:15 +0000 (0:00:00.277) 0:09:22.938 *******
2026-04-01 00:59:23.540459 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.540471 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.540478 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.540484 | orchestrator |
2026-04-01 00:59:23.540489 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-01 00:59:23.540493 | orchestrator | Wednesday 01 April 2026 00:58:16 +0000 (0:00:01.307) 0:09:24.246 *******
2026-04-01 00:59:23.540497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-01 00:59:23.540501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-01 00:59:23.540505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-01 00:59:23.540510 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540516 | orchestrator |
2026-04-01 00:59:23.540522 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-01 00:59:23.540528 | orchestrator | Wednesday 01 April 2026 00:58:17 +0000 (0:00:00.880) 0:09:25.126 *******
2026-04-01 00:59:23.540534 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540540 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540547 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540553 | orchestrator |
2026-04-01 00:59:23.540559 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-01 00:59:23.540565 | orchestrator |
2026-04-01 00:59:23.540571 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-01 00:59:23.540577 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:00.509) 0:09:25.636 *******
2026-04-01 00:59:23.540584 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.540590 | orchestrator |
2026-04-01 00:59:23.540597 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-01 00:59:23.540603 | orchestrator | Wednesday 01 April 2026 00:58:18 +0000 (0:00:00.599) 0:09:26.236 *******
2026-04-01 00:59:23.540609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.540615 | orchestrator |
2026-04-01 00:59:23.540621 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-01 00:59:23.540625 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:00.466) 0:09:26.702 *******
2026-04-01 00:59:23.540628 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540632 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.540636 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.540640 | orchestrator |
2026-04-01 00:59:23.540644 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-01 00:59:23.540648 | orchestrator | Wednesday 01 April 2026 00:58:19 +0000 (0:00:00.282) 0:09:26.985 *******
2026-04-01 00:59:23.540651 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540655 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540659 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540663 | orchestrator |
2026-04-01 00:59:23.540667 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-01 00:59:23.540671 | orchestrator | Wednesday 01 April 2026 00:58:20 +0000 (0:00:01.012) 0:09:27.998 *******
2026-04-01 00:59:23.540674 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540678 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540682 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540686 | orchestrator |
2026-04-01 00:59:23.540690 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-01 00:59:23.540693 | orchestrator | Wednesday 01 April 2026 00:58:21 +0000 (0:00:00.862) 0:09:28.861 *******
2026-04-01 00:59:23.540697 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540701 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540705 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540709 | orchestrator |
2026-04-01 00:59:23.540713 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-01 00:59:23.540716 | orchestrator | Wednesday 01 April 2026 00:58:22 +0000 (0:00:00.852) 0:09:29.713 *******
2026-04-01 00:59:23.540724 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540727 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.540853 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.540865 | orchestrator |
2026-04-01 00:59:23.540872 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-01 00:59:23.540885 | orchestrator | Wednesday 01 April 2026 00:58:22 +0000 (0:00:00.307) 0:09:30.021 *******
2026-04-01 00:59:23.540891 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540897 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.540908 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.540916 | orchestrator |
2026-04-01 00:59:23.540922 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-01 00:59:23.540928 | orchestrator | Wednesday 01 April 2026 00:58:23 +0000 (0:00:00.556) 0:09:30.577 *******
2026-04-01 00:59:23.540933 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.540939 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.540945 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.540951 | orchestrator |
2026-04-01 00:59:23.540957 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-01 00:59:23.540963 | orchestrator | Wednesday 01 April 2026 00:58:23 +0000 (0:00:00.299) 0:09:30.877 *******
2026-04-01 00:59:23.540969 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.540975 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.540980 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.540986 | orchestrator |
2026-04-01 00:59:23.540993 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-01 00:59:23.540999 | orchestrator | Wednesday 01 April 2026 00:58:24 +0000 (0:00:00.851) 0:09:31.729 *******
2026-04-01 00:59:23.541005 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541011 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541017 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541023 | orchestrator |
2026-04-01 00:59:23.541029 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-01 00:59:23.541035 | orchestrator | Wednesday 01 April 2026 00:58:25 +0000 (0:00:00.783) 0:09:32.512 *******
2026-04-01 00:59:23.541042 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541047 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541051 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541055 | orchestrator |
2026-04-01 00:59:23.541059 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-01 00:59:23.541063 | orchestrator | Wednesday 01 April 2026 00:58:25 +0000 (0:00:00.472) 0:09:32.984 *******
2026-04-01 00:59:23.541067 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541070 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541074 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541078 | orchestrator |
2026-04-01 00:59:23.541082 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-01 00:59:23.541086 | orchestrator | Wednesday 01 April 2026 00:58:25 +0000 (0:00:00.288) 0:09:33.273 *******
2026-04-01 00:59:23.541089 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541093 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541097 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541101 | orchestrator |
2026-04-01 00:59:23.541105 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-01 00:59:23.541109 | orchestrator | Wednesday 01 April 2026 00:58:26 +0000 (0:00:00.303) 0:09:33.576 *******
2026-04-01 00:59:23.541112 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541116 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541120 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541124 | orchestrator |
2026-04-01 00:59:23.541128 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-01 00:59:23.541132 | orchestrator | Wednesday 01 April 2026 00:58:26 +0000 (0:00:00.301) 0:09:33.877 *******
2026-04-01 00:59:23.541136 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541145 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541149 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541153 | orchestrator |
2026-04-01 00:59:23.541157 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-01 00:59:23.541160 | orchestrator | Wednesday 01 April 2026 00:58:27 +0000 (0:00:00.461) 0:09:34.338 *******
2026-04-01 00:59:23.541164 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541168 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541172 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541176 | orchestrator |
2026-04-01 00:59:23.541179 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-01 00:59:23.541183 | orchestrator | Wednesday 01 April 2026 00:58:27 +0000 (0:00:00.266) 0:09:34.605 *******
2026-04-01 00:59:23.541187 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541191 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541195 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541198 | orchestrator |
2026-04-01 00:59:23.541202 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-01 00:59:23.541206 | orchestrator | Wednesday 01 April 2026 00:58:27 +0000 (0:00:00.282) 0:09:34.887 *******
2026-04-01 00:59:23.541210 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541214 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541218 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541221 | orchestrator |
2026-04-01 00:59:23.541225 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-01 00:59:23.541229 | orchestrator | Wednesday 01 April 2026 00:58:27 +0000 (0:00:00.317) 0:09:35.204 *******
2026-04-01 00:59:23.541233 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541236 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541240 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541244 | orchestrator |
2026-04-01 00:59:23.541248 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-01 00:59:23.541252 | orchestrator | Wednesday 01 April 2026 00:58:28 +0000 (0:00:00.578) 0:09:35.783 *******
2026-04-01 00:59:23.541256 | orchestrator | ok: [testbed-node-3]
2026-04-01 00:59:23.541260 | orchestrator | ok: [testbed-node-4]
2026-04-01 00:59:23.541264 | orchestrator | ok: [testbed-node-5]
2026-04-01 00:59:23.541270 | orchestrator |
2026-04-01 00:59:23.541277 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-01 00:59:23.541281 | orchestrator | Wednesday 01 April 2026 00:58:28 +0000 (0:00:00.523) 0:09:36.306 *******
2026-04-01 00:59:23.541285 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.541289 | orchestrator |
2026-04-01 00:59:23.541296 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-01 00:59:23.541300 | orchestrator | Wednesday 01 April 2026 00:58:29 +0000 (0:00:00.715) 0:09:37.022 *******
2026-04-01 00:59:23.541308 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541312 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.541316 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.541322 | orchestrator |
2026-04-01 00:59:23.541328 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:59:23.541333 | orchestrator | Wednesday 01 April 2026 00:58:31 +0000 (0:00:02.123) 0:09:39.145 *******
2026-04-01 00:59:23.541339 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.541344 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.541350 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.541356 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.541362 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.541368 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.541374 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.541385 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.541390 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.541396 | orchestrator |
2026-04-01 00:59:23.541402 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-01 00:59:23.541407 | orchestrator | Wednesday 01 April 2026 00:58:33 +0000 (0:00:01.341) 0:09:40.487 *******
2026-04-01 00:59:23.541413 | orchestrator | skipping: [testbed-node-3]
2026-04-01 00:59:23.541419 | orchestrator | skipping: [testbed-node-4]
2026-04-01 00:59:23.541425 | orchestrator | skipping: [testbed-node-5]
2026-04-01 00:59:23.541431 | orchestrator |
2026-04-01 00:59:23.541437 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-01 00:59:23.541443 | orchestrator | Wednesday 01 April 2026 00:58:33 +0000 (0:00:00.328) 0:09:40.815 *******
2026-04-01 00:59:23.541449 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-01 00:59:23.541453 | orchestrator |
2026-04-01 00:59:23.541456 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-01 00:59:23.541460 | orchestrator | Wednesday 01 April 2026 00:58:34 +0000 (0:00:00.751) 0:09:41.566 *******
2026-04-01 00:59:23.541465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-01 00:59:23.541470 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-01 00:59:23.541474 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-01 00:59:23.541478 | orchestrator |
2026-04-01 00:59:23.541482 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-01 00:59:23.541485 | orchestrator | Wednesday 01 April 2026 00:58:35 +0000 (0:00:00.770) 0:09:42.337 *******
2026-04-01 00:59:23.541489 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541493 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:59:23.541497 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541501 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:59:23.541505 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541509 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-01 00:59:23.541512 | orchestrator |
2026-04-01 00:59:23.541516 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-01 00:59:23.541520 | orchestrator | Wednesday 01 April 2026 00:58:39 +0000 (0:00:04.022) 0:09:46.360 *******
2026-04-01 00:59:23.541524 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541528 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.541532 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541535 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.541539 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 00:59:23.541543 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-01 00:59:23.541547 | orchestrator |
2026-04-01 00:59:23.541553 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-01 00:59:23.541559 | orchestrator | Wednesday 01 April 2026 00:58:41 +0000 (0:00:02.206) 0:09:48.566 *******
2026-04-01 00:59:23.541567 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-01 00:59:23.541584 | orchestrator | changed: [testbed-node-3]
2026-04-01 00:59:23.541591 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-01 00:59:23.541597 | orchestrator | changed: [testbed-node-4]
2026-04-01 00:59:23.541604 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-01 00:59:23.541609 | orchestrator | changed: [testbed-node-5]
2026-04-01 00:59:23.541614 | orchestrator |
2026-04-01 00:59:23.541620 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks]
************************************** 2026-04-01 00:59:23.541626 | orchestrator | Wednesday 01 April 2026 00:58:42 +0000 (0:00:01.642) 0:09:50.209 ******* 2026-04-01 00:59:23.541638 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-01 00:59:23.541644 | orchestrator | 2026-04-01 00:59:23.541650 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-01 00:59:23.541656 | orchestrator | Wednesday 01 April 2026 00:58:43 +0000 (0:00:00.221) 0:09:50.431 ******* 2026-04-01 00:59:23.541661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541756 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.541762 | orchestrator | 2026-04-01 00:59:23.541769 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-01 00:59:23.541775 | orchestrator | Wednesday 01 April 2026 00:58:43 +0000 (0:00:00.567) 0:09:50.999 ******* 2026-04-01 00:59:23.541782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541788 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-01 00:59:23.541813 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.541819 | orchestrator | 2026-04-01 00:59:23.541826 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-01 00:59:23.541832 | orchestrator | Wednesday 01 April 2026 00:58:44 +0000 (0:00:00.579) 0:09:51.578 ******* 2026-04-01 00:59:23.541838 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-01 00:59:23.541844 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-01 00:59:23.541851 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-01 00:59:23.541857 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-01 00:59:23.541869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2026-04-01 00:59:23.541875 | orchestrator | 2026-04-01 00:59:23.541881 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-01 00:59:23.541887 | orchestrator | Wednesday 01 April 2026 00:59:11 +0000 (0:00:26.734) 0:10:18.312 ******* 2026-04-01 00:59:23.541893 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.541899 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.541905 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.541910 | orchestrator | 2026-04-01 00:59:23.541916 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-01 00:59:23.541922 | orchestrator | Wednesday 01 April 2026 00:59:11 +0000 (0:00:00.269) 0:10:18.582 ******* 2026-04-01 00:59:23.541927 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.541933 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.541938 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.541944 | orchestrator | 2026-04-01 00:59:23.541949 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-01 00:59:23.541955 | orchestrator | Wednesday 01 April 2026 00:59:11 +0000 (0:00:00.425) 0:10:19.007 ******* 2026-04-01 00:59:23.541961 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:59:23.541966 | orchestrator | 2026-04-01 00:59:23.541971 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-01 00:59:23.541977 | orchestrator | Wednesday 01 April 2026 00:59:12 +0000 (0:00:00.457) 0:10:19.465 ******* 2026-04-01 00:59:23.541983 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:59:23.541989 | orchestrator | 2026-04-01 00:59:23.541998 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2026-04-01 00:59:23.542004 | orchestrator | Wednesday 01 April 2026 00:59:12 +0000 (0:00:00.441) 0:10:19.907 ******* 2026-04-01 00:59:23.542067 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.542077 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.542083 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.542090 | orchestrator | 2026-04-01 00:59:23.542096 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-01 00:59:23.542102 | orchestrator | Wednesday 01 April 2026 00:59:14 +0000 (0:00:01.452) 0:10:21.360 ******* 2026-04-01 00:59:23.542107 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.542113 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.542118 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.542125 | orchestrator | 2026-04-01 00:59:23.542131 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-01 00:59:23.542137 | orchestrator | Wednesday 01 April 2026 00:59:15 +0000 (0:00:01.238) 0:10:22.599 ******* 2026-04-01 00:59:23.542143 | orchestrator | changed: [testbed-node-3] 2026-04-01 00:59:23.542151 | orchestrator | changed: [testbed-node-4] 2026-04-01 00:59:23.542155 | orchestrator | changed: [testbed-node-5] 2026-04-01 00:59:23.542159 | orchestrator | 2026-04-01 00:59:23.542163 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-01 00:59:23.542167 | orchestrator | Wednesday 01 April 2026 00:59:17 +0000 (0:00:01.959) 0:10:24.559 ******* 2026-04-01 00:59:23.542171 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.542175 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.542179 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-01 00:59:23.542182 | orchestrator | 2026-04-01 00:59:23.542186 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-01 00:59:23.542196 | orchestrator | Wednesday 01 April 2026 00:59:19 +0000 (0:00:02.729) 0:10:27.289 ******* 2026-04-01 00:59:23.542200 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.542204 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.542207 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.542211 | orchestrator | 2026-04-01 00:59:23.542215 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-01 00:59:23.542219 | orchestrator | Wednesday 01 April 2026 00:59:20 +0000 (0:00:00.340) 0:10:27.629 ******* 2026-04-01 00:59:23.542223 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 00:59:23.542226 | orchestrator | 2026-04-01 00:59:23.542230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-01 00:59:23.542234 | orchestrator | Wednesday 01 April 2026 00:59:21 +0000 (0:00:00.756) 0:10:28.386 ******* 2026-04-01 00:59:23.542238 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.542242 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.542246 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.542250 | orchestrator | 2026-04-01 00:59:23.542254 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-01 00:59:23.542257 | orchestrator | Wednesday 01 April 2026 00:59:21 +0000 (0:00:00.348) 0:10:28.735 ******* 2026-04-01 00:59:23.542261 | orchestrator | skipping: [testbed-node-3] 2026-04-01 
00:59:23.542265 | orchestrator | skipping: [testbed-node-4] 2026-04-01 00:59:23.542269 | orchestrator | skipping: [testbed-node-5] 2026-04-01 00:59:23.542273 | orchestrator | 2026-04-01 00:59:23.542277 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-01 00:59:23.542280 | orchestrator | Wednesday 01 April 2026 00:59:21 +0000 (0:00:00.321) 0:10:29.056 ******* 2026-04-01 00:59:23.542284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 00:59:23.542289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 00:59:23.542295 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 00:59:23.542301 | orchestrator | skipping: [testbed-node-3] 2026-04-01 00:59:23.542310 | orchestrator | 2026-04-01 00:59:23.542317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-01 00:59:23.542323 | orchestrator | Wednesday 01 April 2026 00:59:22 +0000 (0:00:00.863) 0:10:29.919 ******* 2026-04-01 00:59:23.542329 | orchestrator | ok: [testbed-node-3] 2026-04-01 00:59:23.542335 | orchestrator | ok: [testbed-node-4] 2026-04-01 00:59:23.542340 | orchestrator | ok: [testbed-node-5] 2026-04-01 00:59:23.542346 | orchestrator | 2026-04-01 00:59:23.542351 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 00:59:23.542357 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-01 00:59:23.542365 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-01 00:59:23.542370 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-01 00:59:23.542376 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-01 
00:59:23.542383 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-01 00:59:23.542395 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-01 00:59:23.542402 | orchestrator | 2026-04-01 00:59:23.542408 | orchestrator | 2026-04-01 00:59:23.542415 | orchestrator | 2026-04-01 00:59:23.542426 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 00:59:23.542438 | orchestrator | Wednesday 01 April 2026 00:59:22 +0000 (0:00:00.242) 0:10:30.162 ******* 2026-04-01 00:59:23.542444 | orchestrator | =============================================================================== 2026-04-01 00:59:23.542450 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 53.13s 2026-04-01 00:59:23.542457 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.72s 2026-04-01 00:59:23.542463 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 26.73s 2026-04-01 00:59:23.542469 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.24s 2026-04-01 00:59:23.542475 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.66s 2026-04-01 00:59:23.542481 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.06s 2026-04-01 00:59:23.542487 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.72s 2026-04-01 00:59:23.542493 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.28s 2026-04-01 00:59:23.542499 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.31s 2026-04-01 00:59:23.542506 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.21s 2026-04-01 00:59:23.542512 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.09s 2026-04-01 00:59:23.542518 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.21s 2026-04-01 00:59:23.542525 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.98s 2026-04-01 00:59:23.542531 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.59s 2026-04-01 00:59:23.542537 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.22s 2026-04-01 00:59:23.542543 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.06s 2026-04-01 00:59:23.542549 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.02s 2026-04-01 00:59:23.542555 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.00s 2026-04-01 00:59:23.542561 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.79s 2026-04-01 00:59:23.542567 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.49s 2026-04-01 00:59:23.542574 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is 
in state STARTED 2026-04-01 00:59:23.542580 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:23.542587 | orchestrator | 2026-04-01 00:59:23 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:23.542593 | orchestrator | 2026-04-01 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:26.583889 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:26.585239 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:26.587933 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:26.590437 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:26.592788 | orchestrator | 2026-04-01 00:59:26 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:26.592839 | orchestrator | 2026-04-01 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:29.629266 | orchestrator | 2026-04-01 00:59:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:29.630968 | orchestrator | 2026-04-01 00:59:29 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:29.632430 | orchestrator | 2026-04-01 00:59:29 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:29.633831 | orchestrator | 2026-04-01 00:59:29 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:29.634838 | orchestrator | 2026-04-01 00:59:29 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:29.634953 | orchestrator | 2026-04-01 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 
00:59:32.670420 | orchestrator | 2026-04-01 00:59:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:32.672136 | orchestrator | 2026-04-01 00:59:32 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:32.673518 | orchestrator | 2026-04-01 00:59:32 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:32.675028 | orchestrator | 2026-04-01 00:59:32 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:32.676290 | orchestrator | 2026-04-01 00:59:32 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:32.676329 | orchestrator | 2026-04-01 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:35.719912 | orchestrator | 2026-04-01 00:59:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:35.722974 | orchestrator | 2026-04-01 00:59:35 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:35.725426 | orchestrator | 2026-04-01 00:59:35 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:35.727756 | orchestrator | 2026-04-01 00:59:35 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:35.730104 | orchestrator | 2026-04-01 00:59:35 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:35.730153 | orchestrator | 2026-04-01 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:38.775609 | orchestrator | 2026-04-01 00:59:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:38.777079 | orchestrator | 2026-04-01 00:59:38 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:38.778462 | orchestrator | 2026-04-01 00:59:38 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 
00:59:38.779982 | orchestrator | 2026-04-01 00:59:38 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:38.781077 | orchestrator | 2026-04-01 00:59:38 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:38.781164 | orchestrator | 2026-04-01 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:41.812029 | orchestrator | 2026-04-01 00:59:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:41.812210 | orchestrator | 2026-04-01 00:59:41 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state STARTED 2026-04-01 00:59:41.813461 | orchestrator | 2026-04-01 00:59:41 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:41.814235 | orchestrator | 2026-04-01 00:59:41 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:41.815232 | orchestrator | 2026-04-01 00:59:41 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state STARTED 2026-04-01 00:59:41.815291 | orchestrator | 2026-04-01 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:44.852125 | orchestrator | 2026-04-01 00:59:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:44.853283 | orchestrator | 2026-04-01 00:59:44 | INFO  | Task 41c18ccd-98dc-4ede-9e51-19c306b8a389 is in state SUCCESS 2026-04-01 00:59:44.854744 | orchestrator | 2026-04-01 00:59:44 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:44.856492 | orchestrator | 2026-04-01 00:59:44 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:44.857323 | orchestrator | 2026-04-01 00:59:44 | INFO  | Task 16103425-ac86-4ca5-8ac6-3cfeaa87c7ed is in state SUCCESS 2026-04-01 00:59:44.857534 | orchestrator | 2026-04-01 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:47.894921 | orchestrator 
| 2026-04-01 00:59:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:47.897054 | orchestrator | 2026-04-01 00:59:47 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:47.900212 | orchestrator | 2026-04-01 00:59:47 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:47.900901 | orchestrator | 2026-04-01 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:50.938603 | orchestrator | 2026-04-01 00:59:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:50.944136 | orchestrator | 2026-04-01 00:59:50 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:50.946375 | orchestrator | 2026-04-01 00:59:50 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:50.946447 | orchestrator | 2026-04-01 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:53.998103 | orchestrator | 2026-04-01 00:59:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:53.999508 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:54.001385 | orchestrator | 2026-04-01 00:59:54 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:54.001489 | orchestrator | 2026-04-01 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 00:59:57.054563 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 00:59:57.056549 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 00:59:57.058520 | orchestrator | 2026-04-01 00:59:57 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state STARTED 2026-04-01 00:59:57.058567 | orchestrator | 2026-04-01 00:59:57 | INFO  | 
Wait 1 second(s) until the next check 2026-04-01 01:00:00.096612 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:00.099655 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:00.100363 | orchestrator | 2026-04-01 01:00:00 | INFO  | Task 3ae40d44-34a5-4253-b94f-2fde97cfc8a2 is in state SUCCESS 2026-04-01 01:00:00.100772 | orchestrator | 2026-04-01 01:00:00.100787 | orchestrator | 2026-04-01 01:00:00.100792 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:00:00.100797 | orchestrator | 2026-04-01 01:00:00.100801 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:00:00.100806 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.316) 0:00:00.316 ******* 2026-04-01 01:00:00.100824 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:00.100829 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:00.100833 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:00.100836 | orchestrator | 2026-04-01 01:00:00.100840 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:00:00.100844 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.291) 0:00:00.607 ******* 2026-04-01 01:00:00.100848 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-01 01:00:00.100852 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-01 01:00:00.100856 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-01 01:00:00.100860 | orchestrator | 2026-04-01 01:00:00.100864 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-01 01:00:00.100867 | orchestrator | 2026-04-01 01:00:00.100871 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-04-01 01:00:00.100875 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.291) 0:00:00.898 ******* 2026-04-01 01:00:00.100879 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:00:00.100883 | orchestrator | 2026-04-01 01:00:00.100887 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-04-01 01:00:00.100924 | orchestrator | Wednesday 01 April 2026 00:58:49 +0000 (0:00:00.630) 0:00:01.529 ******* 2026-04-01 01:00:00.100928 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left). 2026-04-01 01:00:00.100932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left). 2026-04-01 01:00:00.100937 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left). 2026-04-01 01:00:00.100941 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left). 2026-04-01 01:00:00.100944 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left). 
2026-04-01 01:00:00.100950 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-01 01:00:00.100956 | orchestrator | 2026-04-01 01:00:00.100960 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:00:00.100964 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.100969 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.100974 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.100977 | orchestrator | 2026-04-01 01:00:00.100981 | orchestrator | 2026-04-01 01:00:00.100985 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:00:00.100996 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:53.089) 0:00:54.618 ******* 2026-04-01 01:00:00.101000 | orchestrator | =============================================================================== 2026-04-01 01:00:00.101004 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.09s 2026-04-01 01:00:00.101008 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.63s 2026-04-01 01:00:00.101012 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-01 01:00:00.101015 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.29s 2026-04-01 01:00:00.101023 | orchestrator | 2026-04-01 01:00:00.101027 | orchestrator | 2026-04-01 01:00:00.101031 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:00:00.101035 | orchestrator | 2026-04-01 01:00:00.101039 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:00:00.101042 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.319) 0:00:00.319 ******* 2026-04-01 01:00:00.101046 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:00:00.101050 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:00:00.101054 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:00:00.101058 | orchestrator | 2026-04-01 01:00:00.101062 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:00:00.101072 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.288) 0:00:00.607 ******* 2026-04-01 01:00:00.101080 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-01 01:00:00.101084 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-01 01:00:00.101088 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-01 01:00:00.101092 | orchestrator | 2026-04-01 01:00:00.101096 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-01 01:00:00.101100 | orchestrator | 2026-04-01 01:00:00.101111 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-01 01:00:00.101115 | orchestrator | Wednesday 01 April 2026 00:58:48 +0000 (0:00:00.279) 0:00:00.886 ******* 2026-04-01 01:00:00.101119 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:00:00.101123 | orchestrator | 2026-04-01 01:00:00.101127 | 
orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-04-01 01:00:00.101130 | orchestrator | Wednesday 01 April 2026 00:58:49 +0000 (0:00:00.629) 0:00:01.516 ******* 2026-04-01 01:00:00.101134 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left). 2026-04-01 01:00:00.101138 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left). 2026-04-01 01:00:00.101142 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left). 2026-04-01 01:00:00.101146 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left). 2026-04-01 01:00:00.101150 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left). 2026-04-01 01:00:00.101155 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-01 01:00:00.101159 | orchestrator | 2026-04-01 01:00:00.101163 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:00:00.101167 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.101171 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.101175 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.101178 | orchestrator | 2026-04-01 
01:00:00.101182 | orchestrator | 2026-04-01 01:00:00.101186 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:00:00.101190 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:53.028) 0:00:54.544 ******* 2026-04-01 01:00:00.101194 | orchestrator | =============================================================================== 2026-04-01 01:00:00.101201 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.03s 2026-04-01 01:00:00.101205 | orchestrator | placement : include_tasks ----------------------------------------------- 0.63s 2026-04-01 01:00:00.101208 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-01 01:00:00.101212 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s 2026-04-01 01:00:00.101216 | orchestrator | 2026-04-01 01:00:00.101220 | orchestrator | 2026-04-01 01:00:00.101224 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-01 01:00:00.101228 | orchestrator | 2026-04-01 01:00:00.101232 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-01 01:00:00.101236 | orchestrator | Wednesday 01 April 2026 00:57:50 +0000 (0:00:00.157) 0:00:00.157 ******* 2026-04-01 01:00:00.101240 | orchestrator | changed: [localhost] 2026-04-01 01:00:00.101244 | orchestrator | 2026-04-01 01:00:00.101247 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-01 01:00:00.101253 | orchestrator | Wednesday 01 April 2026 00:57:51 +0000 (0:00:00.795) 0:00:00.953 ******* 2026-04-01 01:00:00.101258 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
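The "FAILED - RETRYING … (N retries left)" lines above come from Ansible's `retries`/`until` mechanism: the task is re-run until it succeeds or the retries are used up, and the final result records the attempt count (`"attempts": 5` in the failures above). A minimal sketch of that behaviour, with illustrative function and result-dict shapes (not kolla-ansible's actual code):

```python
# Sketch of Ansible retries/until semantics, as seen in the failures above.
# The helper name and result-dict layout are assumptions for illustration.
def run_with_retries(task, retries=5):
    """Call task() until it succeeds or `retries` attempts are exhausted."""
    for attempt in range(1, retries + 1):
        result = task()
        result["attempts"] = attempt
        if not result.get("failed"):
            return result
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
    return result  # last failed result, attempts == retries

def register_service():
    # Stand-in for the keystone registration that needs kolla_toolbox:
    return {"failed": True,
            "msg": "kolla_toolbox container is missing or not running!"}

outcome = run_with_retries(register_service, retries=5)
```

Because every attempt fails, `outcome` carries `failed: True` and `attempts: 5`, matching the shape of the `service-ks-register` failures above.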
2026-04-01 01:00:00.101261 | orchestrator | changed: [localhost] 2026-04-01 01:00:00.101265 | orchestrator | 2026-04-01 01:00:00.101269 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-01 01:00:00.101273 | orchestrator | Wednesday 01 April 2026 00:58:43 +0000 (0:00:52.490) 0:00:53.443 ******* 2026-04-01 01:00:00.101277 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-04-01 01:00:00.101281 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-04-01 01:00:00.101285 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left). 2026-04-01 01:00:00.101290 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.kernel", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2025.1.kernel.sha256"} 2026-04-01 01:00:00.101294 | orchestrator | 2026-04-01 01:00:00.101298 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:00:00.101302 | orchestrator | localhost : ok=2  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-01 01:00:00.101306 | orchestrator | 2026-04-01 01:00:00.101310 | orchestrator | 2026-04-01 01:00:00.101316 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:00:00.101321 | orchestrator | Wednesday 01 April 2026 00:59:58 +0000 (0:01:15.059) 0:02:08.503 ******* 2026-04-01 01:00:00.101324 | orchestrator | =============================================================================== 2026-04-01 01:00:00.101328 | orchestrator | Download ironic-agent kernel ------------------------------------------- 75.06s 2026-04-01 01:00:00.101332 | orchestrator | Download ironic-agent initramfs 
---------------------------------------- 52.49s 2026-04-01 01:00:00.101336 | orchestrator | Ensure the destination directory exists --------------------------------- 0.80s 2026-04-01 01:00:00.101340 | orchestrator | 2026-04-01 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:03.140133 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:03.142350 | orchestrator | 2026-04-01 01:00:03 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:03.142391 | orchestrator | 2026-04-01 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:06.182558 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:06.184507 | orchestrator | 2026-04-01 01:00:06 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:06.184562 | orchestrator | 2026-04-01 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:09.228796 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:09.230157 | orchestrator | 2026-04-01 01:00:09 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:09.230223 | orchestrator | 2026-04-01 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:12.279452 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:12.280659 | orchestrator | 2026-04-01 01:00:12 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:12.280749 | orchestrator | 2026-04-01 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:00:15.327562 | orchestrator | 2026-04-01 01:00:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:00:15.329918 | orchestrator | 
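The repeated "Task &lt;uuid&gt; is in state STARTED … Wait 1 second(s) until the next check" records are a client polling two asynchronous tasks until both reach a terminal state. A small sketch of such a loop, with assumed names and state model (not the actual osism implementation):

```python
import time

# Illustrative poll loop: check each task's state, wait, repeat until all
# tasks are terminal. State values mirror the log (STARTED -> SUCCESS).
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1, max_checks=1000):
    """Poll get_state(task_id) until every task is terminal; return states."""
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s in TERMINAL for s in states.values()):
            return states
        time.sleep(interval)
    raise TimeoutError("tasks did not reach a terminal state")

# Fake state source: the second task needs one more check than the first.
seqs = {
    "c1541cda": iter(["STARTED", "SUCCESS", "SUCCESS"]),
    "3eab0b43": iter(["STARTED", "STARTED", "SUCCESS"]),
}
final = wait_for_tasks(lambda tid: next(seqs[tid]),
                       ["c1541cda", "3eab0b43"], interval=0)
```

With the fake sequences above, the loop returns once both tasks report SUCCESS, just as the log stops polling when task 3eab0b43… reaches SUCCESS.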
2026-04-01 01:00:15 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:00:15.330070 | orchestrator | 2026-04-01 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:19.287546 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task
c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:01:19.289357 | orchestrator | 2026-04-01 01:01:19 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:01:19.289416 | orchestrator | 2026-04-01 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:22.322811 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:01:22.324137 | orchestrator | 2026-04-01 01:01:22 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state STARTED 2026-04-01 01:01:22.324189 | orchestrator | 2026-04-01 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:01:25.363649 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:01:25.367684 | orchestrator | 2026-04-01 01:01:25 | INFO  | Task 3eab0b43-bc52-4bba-a38d-674d001ee96c is in state SUCCESS 2026-04-01 01:01:25.369430 | orchestrator | 2026-04-01 01:01:25.369562 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 01:01:25.369578 | orchestrator | 2.16.14 2026-04-01 01:01:25.369585 | orchestrator | 2026-04-01 01:01:25.369592 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-01 01:01:25.369600 | orchestrator | 2026-04-01 01:01:25.369607 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-01 01:01:25.369652 | orchestrator | Wednesday 01 April 2026 00:59:27 +0000 (0:00:00.496) 0:00:00.496 ******* 2026-04-01 01:01:25.369659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:01:25.369694 | orchestrator | 2026-04-01 01:01:25.369701 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-01 01:01:25.369843 | orchestrator | Wednesday 
01 April 2026 00:59:28 +0000 (0:00:00.467) 0:00:00.964 ******* 2026-04-01 01:01:25.369851 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.369857 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.369864 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.369896 | orchestrator | 2026-04-01 01:01:25.369903 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-01 01:01:25.369909 | orchestrator | Wednesday 01 April 2026 00:59:29 +0000 (0:00:00.939) 0:00:01.903 ******* 2026-04-01 01:01:25.369916 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.369921 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.369927 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.369933 | orchestrator | 2026-04-01 01:01:25.369938 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-01 01:01:25.369945 | orchestrator | Wednesday 01 April 2026 00:59:29 +0000 (0:00:00.262) 0:00:02.165 ******* 2026-04-01 01:01:25.369951 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.369967 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.369973 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.369979 | orchestrator | 2026-04-01 01:01:25.369985 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-01 01:01:25.369991 | orchestrator | Wednesday 01 April 2026 00:59:30 +0000 (0:00:00.848) 0:00:03.013 ******* 2026-04-01 01:01:25.369997 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.370002 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.370008 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.370047 | orchestrator | 2026-04-01 01:01:25.370056 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-01 01:01:25.370062 | orchestrator | Wednesday 01 April 2026 00:59:30 +0000 (0:00:00.257) 0:00:03.271 ******* 2026-04-01 
01:01:25.370068 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.370074 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.370080 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.370086 | orchestrator | 2026-04-01 01:01:25.370092 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-01 01:01:25.370097 | orchestrator | Wednesday 01 April 2026 00:59:30 +0000 (0:00:00.240) 0:00:03.511 ******* 2026-04-01 01:01:25.370102 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.370108 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.370114 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.370120 | orchestrator | 2026-04-01 01:01:25.370126 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-01 01:01:25.370132 | orchestrator | Wednesday 01 April 2026 00:59:31 +0000 (0:00:00.289) 0:00:03.800 ******* 2026-04-01 01:01:25.370138 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.370145 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.370151 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.370157 | orchestrator | 2026-04-01 01:01:25.370164 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-01 01:01:25.370170 | orchestrator | Wednesday 01 April 2026 00:59:31 +0000 (0:00:00.382) 0:00:04.182 ******* 2026-04-01 01:01:25.370176 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.370183 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.370188 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.370194 | orchestrator | 2026-04-01 01:01:25.370200 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-01 01:01:25.370206 | orchestrator | Wednesday 01 April 2026 00:59:31 +0000 (0:00:00.277) 0:00:04.460 ******* 2026-04-01 01:01:25.370212 | orchestrator | ok: [testbed-node-3 
-> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 01:01:25.370243 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 01:01:25.370249 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 01:01:25.370255 | orchestrator | 2026-04-01 01:01:25.370273 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-01 01:01:25.370279 | orchestrator | Wednesday 01 April 2026 00:59:32 +0000 (0:00:00.589) 0:00:05.049 ******* 2026-04-01 01:01:25.370285 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.370292 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.370297 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.370315 | orchestrator | 2026-04-01 01:01:25.370322 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-01 01:01:25.370327 | orchestrator | Wednesday 01 April 2026 00:59:32 +0000 (0:00:00.353) 0:00:05.402 ******* 2026-04-01 01:01:25.370333 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 01:01:25.370339 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 01:01:25.370344 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 01:01:25.370350 | orchestrator | 2026-04-01 01:01:25.370356 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-01 01:01:25.370362 | orchestrator | Wednesday 01 April 2026 00:59:35 +0000 (0:00:02.886) 0:00:08.289 ******* 2026-04-01 01:01:25.370368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-01 01:01:25.370374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-01 01:01:25.370380 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2026-04-01 01:01:25.370386 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.370392 | orchestrator | 2026-04-01 01:01:25.370413 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-01 01:01:25.370419 | orchestrator | Wednesday 01 April 2026 00:59:35 +0000 (0:00:00.408) 0:00:08.698 ******* 2026-04-01 01:01:25.370428 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370450 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.370455 | orchestrator | 2026-04-01 01:01:25.370462 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-01 01:01:25.370469 | orchestrator | Wednesday 01 April 2026 00:59:36 +0000 (0:00:00.856) 0:00:09.554 ******* 2026-04-01 01:01:25.370485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.370507 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.370513 | orchestrator | 2026-04-01 01:01:25.370519 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-01 01:01:25.370532 | orchestrator | Wednesday 01 April 2026 00:59:36 +0000 (0:00:00.146) 0:00:09.701 ******* 2026-04-01 01:01:25.370540 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ec450c679239', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-01 00:59:33.541996', 'end': '2026-04-01 00:59:33.584680', 'delta': '0:00:00.042684', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ec450c679239'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-01 01:01:25.370551 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '327320316fff', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-01 00:59:34.590175', 'end': '2026-04-01 00:59:34.627713', 'delta': '0:00:00.037538', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['327320316fff'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-01 01:01:25.370565 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de8d26d9c190', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-01 00:59:35.398012', 'end': '2026-04-01 00:59:35.432498', 'delta': '0:00:00.034486', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de8d26d9c190'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-01 01:01:25.370572 | orchestrator | 2026-04-01 01:01:25.370579 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-01 01:01:25.370585 | orchestrator | Wednesday 01 April 2026 00:59:37 +0000 (0:00:00.341) 0:00:10.043 ******* 2026-04-01 01:01:25.370591 | 
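The "Find a running mon container" results above show ceph-facts running `docker ps -q --filter name=ceph-mon-<host>` on each monitor host and then keeping a container id per host. A sketch of that command construction and selection logic (helper names are illustrative, not ceph-ansible's code):

```python
# Build the per-host ps command and pick a running mon from recorded
# stdout_lines, mirroring the ceph-facts tasks above. Illustrative only.
def mon_ps_command(host, container_binary="docker"):
    return [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{host}"]

def pick_running_mon(results):
    """results: host -> stdout_lines of the ps command.
    Return the first host reporting a non-empty container id."""
    for host, lines in results.items():
        if lines and lines[0]:
            return host, lines[0]
    return None, None

cmd = mon_ps_command("testbed-node-0")
host, cid = pick_running_mon({
    "testbed-node-0": ["ec450c679239"],   # container ids from the log above
    "testbed-node-1": ["327320316fff"],
    "testbed-node-2": ["de8d26d9c190"],
})
```

Here `cmd` matches the `docker ps -q --filter name=ceph-mon-testbed-node-0` invocation recorded in the task result, and the first host with a running container is selected.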
orchestrator | ok: [testbed-node-3]
2026-04-01 01:01:25.370596 | orchestrator | ok: [testbed-node-4]
2026-04-01 01:01:25.370603 | orchestrator | ok: [testbed-node-5]
2026-04-01 01:01:25.370610 | orchestrator |
2026-04-01 01:01:25.370617 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-01 01:01:25.370622 | orchestrator | Wednesday 01 April 2026 00:59:37 +0000 (0:00:00.433) 0:00:10.476 *******
2026-04-01 01:01:25.370628 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-01 01:01:25.370638 | orchestrator |
2026-04-01 01:01:25.370644 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-01 01:01:25.370650 | orchestrator | Wednesday 01 April 2026 00:59:39 +0000 (0:00:01.805) 0:00:12.282 *******
2026-04-01 01:01:25.370657 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370663 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370669 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370676 | orchestrator |
2026-04-01 01:01:25.370682 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-01 01:01:25.370689 | orchestrator | Wednesday 01 April 2026 00:59:39 +0000 (0:00:00.296) 0:00:12.579 *******
2026-04-01 01:01:25.370694 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370707 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370713 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370719 | orchestrator |
2026-04-01 01:01:25.370725 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-01 01:01:25.370729 | orchestrator | Wednesday 01 April 2026 00:59:40 +0000 (0:00:00.381) 0:00:12.960 *******
2026-04-01 01:01:25.370733 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370737 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370741 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370744 | orchestrator |
2026-04-01 01:01:25.370748 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-01 01:01:25.370752 | orchestrator | Wednesday 01 April 2026 00:59:40 +0000 (0:00:00.492) 0:00:13.452 *******
2026-04-01 01:01:25.370756 | orchestrator | ok: [testbed-node-3]
2026-04-01 01:01:25.370760 | orchestrator |
2026-04-01 01:01:25.370764 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-01 01:01:25.370768 | orchestrator | Wednesday 01 April 2026 00:59:40 +0000 (0:00:00.142) 0:00:13.594 *******
2026-04-01 01:01:25.370772 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370776 | orchestrator |
2026-04-01 01:01:25.370780 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-01 01:01:25.370783 | orchestrator | Wednesday 01 April 2026 00:59:41 +0000 (0:00:00.217) 0:00:13.812 *******
2026-04-01 01:01:25.370787 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370791 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370795 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370799 | orchestrator |
2026-04-01 01:01:25.370803 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-01 01:01:25.370807 | orchestrator | Wednesday 01 April 2026 00:59:41 +0000 (0:00:00.272) 0:00:14.084 *******
2026-04-01 01:01:25.370811 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370814 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370818 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370822 | orchestrator |
2026-04-01 01:01:25.370826 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-01 01:01:25.370830 | orchestrator | Wednesday 01 April 2026 00:59:41 +0000 (0:00:00.304) 0:00:14.388 *******
2026-04-01 01:01:25.370834 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370838 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370841 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370845 | orchestrator |
2026-04-01 01:01:25.370849 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-01 01:01:25.370853 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:00.428) 0:00:14.817 *******
2026-04-01 01:01:25.370857 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370861 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370864 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370868 | orchestrator |
2026-04-01 01:01:25.370872 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-01 01:01:25.370876 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:00.267) 0:00:15.084 *******
2026-04-01 01:01:25.370880 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370884 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370887 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370891 | orchestrator |
2026-04-01 01:01:25.370895 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-01 01:01:25.370899 | orchestrator | Wednesday 01 April 2026 00:59:42 +0000 (0:00:00.301) 0:00:15.385 *******
2026-04-01 01:01:25.370903 | orchestrator | skipping: [testbed-node-3]
2026-04-01 01:01:25.370906 | orchestrator | skipping: [testbed-node-4]
2026-04-01 01:01:25.370910 | orchestrator | skipping: [testbed-node-5]
2026-04-01 01:01:25.370918 | orchestrator |
2026-04-01 01:01:25.370922 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-01 01:01:25.370926 | orchestrator | Wednesday 01 April
2026 00:59:42 +0000 (0:00:00.279) 0:00:15.664 ******* 2026-04-01 01:01:25.370942 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.370948 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.370953 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.370959 | orchestrator | 2026-04-01 01:01:25.370965 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-01 01:01:25.370972 | orchestrator | Wednesday 01 April 2026 00:59:43 +0000 (0:00:00.406) 0:00:16.070 ******* 2026-04-01 01:01:25.370978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583', 'dm-uuid-LVM-XVYMn3IN00mdi6EnfVkPlw256qq9nI7912VpaCpkpbqfuvPtEYrqcEyji9q53KBz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.370987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993', 'dm-uuid-LVM-JQL58WVQQeGdBvo3KJNSREIYwthU36Keczsc7QaX34X6TCp6mDZGh2SdZgOENJGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.370991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.370996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371042 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d', 'dm-uuid-LVM-iMX0SsshsPQVLScJsBqh3Uii0sRvXBeOCIRYfrnn2E2CJid3H0gSkexslogBax5C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j3cUEk-BjBv-qffa-yDut-NG4M-uRvZ-xxhpE2', 'scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896', 'scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMZbdy-hpNd-YpXd-F35t-13ZE-ubGA-klAIbY', 'scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402', 'scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082', 'dm-uuid-LVM-AzMBHf9V42Lz4YPHKNHAEEsPuJnHRSdJoTXEpZXZJVDV0MamSFteceMneZc4yeoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1', 'scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-01 01:01:25.371094 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.371103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f', 'dm-uuid-LVM-Jq2MIcpey21uNPOZEaO9KhTykiV3qU0ZJf4J3S8rWh1hJgZ67k96VkIqEvzh4OyU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f', 'dm-uuid-LVM-6XbyFf6QbhKgKGPUkVKGPbWJ8VbkkOv366W0EKFsdJAkWsCELrMi62mRphvQtkxR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-edqX2r-NIRK-P1Nk-DRh5-tSiQ-BYrO-Mo2mdM', 'scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4', 'scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IxVIHV-3xe3-l3il-mVFL-Ev2H-4sn6-FPVpoS', 'scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005', 'scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7', 'scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371199 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.371203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-01 01:01:25.371257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371264 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Pgqrb-Y4oO-t51v-LUqF-Xfe4-tPEB-8uA0p8', 'scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363', 'scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X6NvH4-s8a1-fThR-cuqO-gA38-WCiF-j7Gb9y', 'scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67', 'scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7', 'scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-01 01:01:25.371294 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.371298 | orchestrator | 2026-04-01 01:01:25.371302 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-01 01:01:25.371306 | orchestrator | Wednesday 01 April 2026 00:59:43 +0000 (0:00:00.469) 0:00:16.540 ******* 2026-04-01 01:01:25.371314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583', 'dm-uuid-LVM-XVYMn3IN00mdi6EnfVkPlw256qq9nI7912VpaCpkpbqfuvPtEYrqcEyji9q53KBz'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993', 'dm-uuid-LVM-JQL58WVQQeGdBvo3KJNSREIYwthU36Keczsc7QaX34X6TCp6mDZGh2SdZgOENJGL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d', 'dm-uuid-LVM-iMX0SsshsPQVLScJsBqh3Uii0sRvXBeOCIRYfrnn2E2CJid3H0gSkexslogBax5C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082', 'dm-uuid-LVM-AzMBHf9V42Lz4YPHKNHAEEsPuJnHRSdJoTXEpZXZJVDV0MamSFteceMneZc4yeoD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371382 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16', 'scsi-SQEMU_QEMU_HARDDISK_2a4f4914-3be5-4bda-a47c-01d9519cb486-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--070a6fcd--e232--5822--bdac--2856eb469583-osd--block--070a6fcd--e232--5822--bdac--2856eb469583'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j3cUEk-BjBv-qffa-yDut-NG4M-uRvZ-xxhpE2', 'scsi-0QEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896', 'scsi-SQEMU_QEMU_HARDDISK_181eb0d3-49bd-41f1-8f26-95e9754c9896'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--24dba708--820d--5543--af14--6cbe38251993-osd--block--24dba708--820d--5543--af14--6cbe38251993'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMZbdy-hpNd-YpXd-F35t-13ZE-ubGA-klAIbY', 'scsi-0QEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402', 'scsi-SQEMU_QEMU_HARDDISK_91aabfbd-d205-4d26-bb68-6c75b4d02402'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1', 'scsi-SQEMU_QEMU_HARDDISK_a922e28e-6911-40ed-8ea7-c2624142d8a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371447 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.371454 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc455e6b-b8ba-47d8-ab01-6be8b039ad3d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-01 01:01:25.371483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--00bcfd13--59f0--54da--b43f--34edf6af7c7d-osd--block--00bcfd13--59f0--54da--b43f--34edf6af7c7d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-edqX2r-NIRK-P1Nk-DRh5-tSiQ-BYrO-Mo2mdM', 'scsi-0QEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4', 'scsi-SQEMU_QEMU_HARDDISK_1a9aff5c-ee70-4834-ada6-16d88406b9f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f', 'dm-uuid-LVM-Jq2MIcpey21uNPOZEaO9KhTykiV3qU0ZJf4J3S8rWh1hJgZ67k96VkIqEvzh4OyU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f8eedd5--4e35--5081--a67e--565e77fef082-osd--block--2f8eedd5--4e35--5081--a67e--565e77fef082'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IxVIHV-3xe3-l3il-mVFL-Ev2H-4sn6-FPVpoS', 'scsi-0QEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005', 'scsi-SQEMU_QEMU_HARDDISK_93c245f0-d55e-41f5-879e-2175ba1dd005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371848 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f', 'dm-uuid-LVM-6XbyFf6QbhKgKGPUkVKGPbWJ8VbkkOv366W0EKFsdJAkWsCELrMi62mRphvQtkxR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7', 'scsi-SQEMU_QEMU_HARDDISK_289dd0c3-dff8-4236-9edf-8ec702693da7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371884 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371888 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.371898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371907 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb71c8c2-ca64-4a55-b962-663cadefaf49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f-osd--block--c7c10550--c1bc--5fe3--90d5--7d7a9167f51f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Pgqrb-Y4oO-t51v-LUqF-Xfe4-tPEB-8uA0p8', 'scsi-0QEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363', 'scsi-SQEMU_QEMU_HARDDISK_e5794c61-1895-432b-bae0-e64b20adb363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d3162267--511d--5f73--a1c4--60a47e452e5f-osd--block--d3162267--511d--5f73--a1c4--60a47e452e5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X6NvH4-s8a1-fThR-cuqO-gA38-WCiF-j7Gb9y', 'scsi-0QEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67', 'scsi-SQEMU_QEMU_HARDDISK_b090d968-077f-4316-a7cb-bda539f6db67'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7', 'scsi-SQEMU_QEMU_HARDDISK_ac6b0a42-475e-47b3-b6b9-8775ae6256f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-01-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-01 01:01:25.371968 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.371972 | orchestrator | 2026-04-01 01:01:25.371977 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-01 01:01:25.371981 | orchestrator | Wednesday 01 April 2026 00:59:44 +0000 (0:00:00.513) 0:00:17.054 ******* 2026-04-01 01:01:25.371984 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.371989 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.371993 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.371997 | orchestrator | 2026-04-01 01:01:25.372001 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-04-01 01:01:25.372005 | orchestrator | Wednesday 01 April 2026 00:59:45 +0000 (0:00:00.703) 0:00:17.757 ******* 2026-04-01 01:01:25.372008 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.372016 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.372019 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.372023 | orchestrator | 2026-04-01 01:01:25.372027 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-01 01:01:25.372031 | orchestrator | Wednesday 01 April 2026 00:59:45 +0000 (0:00:00.397) 0:00:18.155 ******* 2026-04-01 01:01:25.372035 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.372039 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.372043 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.372049 | orchestrator | 2026-04-01 01:01:25.372053 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-01 01:01:25.372057 | orchestrator | Wednesday 01 April 2026 00:59:46 +0000 (0:00:00.684) 0:00:18.839 ******* 2026-04-01 01:01:25.372061 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372065 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372069 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372073 | orchestrator | 2026-04-01 01:01:25.372077 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-01 01:01:25.372081 | orchestrator | Wednesday 01 April 2026 00:59:46 +0000 (0:00:00.252) 0:00:19.091 ******* 2026-04-01 01:01:25.372084 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372088 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372092 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372096 | orchestrator | 2026-04-01 01:01:25.372100 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-04-01 01:01:25.372104 | orchestrator | Wednesday 01 April 2026 00:59:46 +0000 (0:00:00.402) 0:00:19.494 ******* 2026-04-01 01:01:25.372108 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372112 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372115 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372119 | orchestrator | 2026-04-01 01:01:25.372123 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-01 01:01:25.372127 | orchestrator | Wednesday 01 April 2026 00:59:47 +0000 (0:00:00.433) 0:00:19.928 ******* 2026-04-01 01:01:25.372131 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-01 01:01:25.372135 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-01 01:01:25.372139 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-01 01:01:25.372142 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-01 01:01:25.372146 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-01 01:01:25.372150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-01 01:01:25.372154 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-01 01:01:25.372158 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-01 01:01:25.372162 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-01 01:01:25.372165 | orchestrator | 2026-04-01 01:01:25.372169 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-01 01:01:25.372173 | orchestrator | Wednesday 01 April 2026 00:59:47 +0000 (0:00:00.742) 0:00:20.670 ******* 2026-04-01 01:01:25.372177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-01 01:01:25.372181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-01 01:01:25.372185 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-04-01 01:01:25.372189 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-01 01:01:25.372197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-01 01:01:25.372201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-01 01:01:25.372205 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-01 01:01:25.372237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-01 01:01:25.372246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-01 01:01:25.372253 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372258 | orchestrator | 2026-04-01 01:01:25.372262 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-01 01:01:25.372265 | orchestrator | Wednesday 01 April 2026 00:59:48 +0000 (0:00:00.313) 0:00:20.984 ******* 2026-04-01 01:01:25.372270 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:01:25.372274 | orchestrator | 2026-04-01 01:01:25.372278 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-01 01:01:25.372284 | orchestrator | Wednesday 01 April 2026 00:59:48 +0000 (0:00:00.593) 0:00:21.577 ******* 2026-04-01 01:01:25.372288 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372292 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372296 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372300 | orchestrator | 2026-04-01 01:01:25.372304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-04-01 01:01:25.372308 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.287) 0:00:21.864 ******* 2026-04-01 01:01:25.372312 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372316 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372320 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372324 | orchestrator | 2026-04-01 01:01:25.372328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-01 01:01:25.372331 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.262) 0:00:22.127 ******* 2026-04-01 01:01:25.372335 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372339 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372343 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:01:25.372347 | orchestrator | 2026-04-01 01:01:25.372351 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-01 01:01:25.372355 | orchestrator | Wednesday 01 April 2026 00:59:49 +0000 (0:00:00.280) 0:00:22.407 ******* 2026-04-01 01:01:25.372359 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.372363 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.372367 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.372370 | orchestrator | 2026-04-01 01:01:25.372374 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-01 01:01:25.372378 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.508) 0:00:22.916 ******* 2026-04-01 01:01:25.372382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 01:01:25.372386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 01:01:25.372394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 01:01:25.372398 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372402 | 
orchestrator | 2026-04-01 01:01:25.372405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-01 01:01:25.372410 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.346) 0:00:23.263 ******* 2026-04-01 01:01:25.372413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 01:01:25.372417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 01:01:25.372421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 01:01:25.372425 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372429 | orchestrator | 2026-04-01 01:01:25.372432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-01 01:01:25.372436 | orchestrator | Wednesday 01 April 2026 00:59:50 +0000 (0:00:00.334) 0:00:23.598 ******* 2026-04-01 01:01:25.372441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-01 01:01:25.372445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-01 01:01:25.372450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-01 01:01:25.372462 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372467 | orchestrator | 2026-04-01 01:01:25.372471 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-01 01:01:25.372475 | orchestrator | Wednesday 01 April 2026 00:59:51 +0000 (0:00:00.344) 0:00:23.942 ******* 2026-04-01 01:01:25.372480 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:01:25.372485 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:01:25.372489 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:01:25.372495 | orchestrator | 2026-04-01 01:01:25.372501 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-01 01:01:25.372507 | orchestrator | Wednesday 01 April 2026 00:59:51 +0000 
(0:00:00.295) 0:00:24.238 ******* 2026-04-01 01:01:25.372512 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-01 01:01:25.372518 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-01 01:01:25.372525 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-01 01:01:25.372530 | orchestrator | 2026-04-01 01:01:25.372537 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-01 01:01:25.372542 | orchestrator | Wednesday 01 April 2026 00:59:51 +0000 (0:00:00.461) 0:00:24.700 ******* 2026-04-01 01:01:25.372548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 01:01:25.372554 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 01:01:25.372560 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 01:01:25.372566 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-01 01:01:25.372572 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-01 01:01:25.372578 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-01 01:01:25.372584 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-01 01:01:25.372590 | orchestrator | 2026-04-01 01:01:25.372596 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-01 01:01:25.372605 | orchestrator | Wednesday 01 April 2026 00:59:52 +0000 (0:00:00.850) 0:00:25.550 ******* 2026-04-01 01:01:25.372612 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-01 01:01:25.372618 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-01 01:01:25.372624 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-01 01:01:25.372631 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-01 01:01:25.372637 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-01 01:01:25.372644 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-01 01:01:25.372649 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-01 01:01:25.372654 | orchestrator | 2026-04-01 01:01:25.372658 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-01 01:01:25.372663 | orchestrator | Wednesday 01 April 2026 00:59:54 +0000 (0:00:01.644) 0:00:27.195 ******* 2026-04-01 01:01:25.372668 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:01:25.372672 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:01:25.372676 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-01 01:01:25.372680 | orchestrator | 2026-04-01 01:01:25.372684 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-01 01:01:25.372688 | orchestrator | Wednesday 01 April 2026 00:59:54 +0000 (0:00:00.326) 0:00:27.521 ******* 2026-04-01 01:01:25.372692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 01:01:25.372702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-04-01 01:01:25.372710 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 01:01:25.372714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 01:01:25.372720 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-01 01:01:25.372727 | orchestrator | 2026-04-01 01:01:25.372732 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-01 01:01:25.372739 | orchestrator | Wednesday 01 April 2026 01:00:37 +0000 (0:00:43.082) 0:01:10.604 ******* 2026-04-01 01:01:25.372745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372752 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372764 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372775 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 
01:01:25.372779 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-01 01:01:25.372783 | orchestrator | 2026-04-01 01:01:25.372787 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-01 01:01:25.372791 | orchestrator | Wednesday 01 April 2026 01:00:58 +0000 (0:00:20.660) 0:01:31.264 ******* 2026-04-01 01:01:25.372795 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372799 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372802 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372810 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372814 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372818 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-01 01:01:25.372822 | orchestrator | 2026-04-01 01:01:25.372829 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-01 01:01:25.372833 | orchestrator | Wednesday 01 April 2026 01:01:08 +0000 (0:00:09.926) 0:01:41.190 ******* 2026-04-01 01:01:25.372837 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372841 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-01 01:01:25.372847 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-01 01:01:25.372852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-01 01:01:25.372865 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 01:01:25.372871 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 01:01:25.372881 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 01:01:25.372889 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 01:01:25.372895 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 01:01:25.372900 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 01:01:25.372906 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 01:01:25.372912 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 01:01:25.372917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 01:01:25.372923 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 01:01:25.372928 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 01:01:25.372934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-01 01:01:25.372940 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-01 01:01:25.372946 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-01 01:01:25.372952 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-01 01:01:25.372959 | orchestrator |
2026-04-01 01:01:25.372970 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:01:25.372977 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-01 01:01:25.372985 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-01 01:01:25.372991 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-01 01:01:25.372998 | orchestrator |
2026-04-01 01:01:25.373004 | orchestrator |
2026-04-01 01:01:25.373010 | orchestrator |
2026-04-01 01:01:25.373016 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:01:25.373022 | orchestrator | Wednesday 01 April 2026 01:01:24 +0000 (0:00:16.231) 0:01:57.423 *******
2026-04-01 01:01:25.373028 | orchestrator | ===============================================================================
2026-04-01 01:01:25.373035 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.08s
2026-04-01 01:01:25.373041 | orchestrator | generate keys ---------------------------------------------------------- 20.66s
2026-04-01 01:01:25.373048 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.23s
2026-04-01 01:01:25.373054 | orchestrator | get keys from monitors -------------------------------------------------- 9.93s
2026-04-01 01:01:25.373060 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.89s
2026-04-01 01:01:25.373067 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.81s
2026-04-01 01:01:25.373073 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.64s
2026-04-01 01:01:25.373079 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.94s
2026-04-01 01:01:25.373085 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.86s
2026-04-01 01:01:25.373091 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.85s
2026-04-01 01:01:25.373097 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.85s
2026-04-01 01:01:25.373103 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.74s
2026-04-01 01:01:25.373116 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s
2026-04-01 01:01:25.373123 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-04-01 01:01:25.373129 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.59s
2026-04-01 01:01:25.373135 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s
2026-04-01 01:01:25.373142 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.51s
2026-04-01 01:01:25.373148 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.51s
2026-04-01 01:01:25.373154 | orchestrator | ceph-facts : Set_fact fsid ---------------------------------------------- 0.49s
2026-04-01 01:01:25.373160 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.47s
2026-04-01 01:01:25.373171 | orchestrator | 2026-04-01 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:28.415066 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:28.416552 | orchestrator | 2026-04-01 01:01:28 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:28.416725 | orchestrator | 2026-04-01 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:31.457635 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:31.458910 | orchestrator | 2026-04-01 01:01:31 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:31.458944 | orchestrator | 2026-04-01 01:01:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:34.501541 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:34.503889 | orchestrator | 2026-04-01 01:01:34 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:34.503936 | orchestrator | 2026-04-01 01:01:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:37.535324 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:37.538179 | orchestrator | 2026-04-01 01:01:37 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:37.538227 | orchestrator | 2026-04-01 01:01:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:40.582353 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:40.583523 | orchestrator | 2026-04-01 01:01:40 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:40.583596 | orchestrator | 2026-04-01 01:01:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:43.628331 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:43.629549 | orchestrator | 2026-04-01 01:01:43 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:43.629592 | orchestrator | 2026-04-01 01:01:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:46.680540 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:46.682596 | orchestrator | 2026-04-01 01:01:46 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:46.682677 | orchestrator | 2026-04-01 01:01:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:49.725342 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:49.728329 | orchestrator | 2026-04-01 01:01:49 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:49.728400 | orchestrator | 2026-04-01 01:01:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:52.767917 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:52.768801 | orchestrator | 2026-04-01 01:01:52 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:52.769673 | orchestrator | 2026-04-01 01:01:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:55.812893 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:55.814248 | orchestrator | 2026-04-01 01:01:55 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:55.814309 | orchestrator | 2026-04-01 01:01:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:01:58.868939 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:01:58.870305 | orchestrator | 2026-04-01 01:01:58 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state STARTED
2026-04-01 01:01:58.870400 | orchestrator | 2026-04-01 01:01:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:01.921758 | orchestrator | 2026-04-01 01:02:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:01.923514 | orchestrator | 2026-04-01 01:02:01 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:01.924676 | orchestrator | 2026-04-01 01:02:01 | INFO  | Task 453ef8b4-8346-4ed0-a959-b3fbfbfa3de3 is in state SUCCESS
2026-04-01 01:02:01.924721 | orchestrator | 2026-04-01 01:02:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:04.963389 | orchestrator | 2026-04-01 01:02:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:04.963651 | orchestrator | 2026-04-01 01:02:04 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:04.964000 | orchestrator | 2026-04-01 01:02:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:08.008928 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:08.009281 | orchestrator | 2026-04-01 01:02:08 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:08.009321 | orchestrator | 2026-04-01 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:11.039216 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:11.040433 | orchestrator | 2026-04-01 01:02:11 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:11.041534 | orchestrator | 2026-04-01 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:14.073316 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:14.076867 | orchestrator | 2026-04-01 01:02:14 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:14.076939 | orchestrator | 2026-04-01 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:17.116305 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:17.116485 | orchestrator | 2026-04-01 01:02:17 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:17.116538 | orchestrator | 2026-04-01 01:02:17 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 01:02:20.164741 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:20.166495 | orchestrator | 2026-04-01 01:02:20 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:20.166571 | orchestrator | 2026-04-01 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:23.206730 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:23.208466 | orchestrator | 2026-04-01 01:02:23 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:23.208525 | orchestrator | 2026-04-01 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:26.246783 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:26.251112 | orchestrator | 2026-04-01 01:02:26 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:26.251186 | orchestrator | 2026-04-01 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:29.296399 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:29.297605 | orchestrator | 2026-04-01 01:02:29 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:29.297654 | orchestrator | 2026-04-01 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:32.333517 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:32.335369 | orchestrator | 2026-04-01 01:02:32 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:32.335451 | orchestrator | 2026-04-01 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:35.375624 | orchestrator | 2026-04-01 
01:02:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:35.377339 | orchestrator | 2026-04-01 01:02:35 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:35.377390 | orchestrator | 2026-04-01 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:38.416338 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:38.417139 | orchestrator | 2026-04-01 01:02:38 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:38.417245 | orchestrator | 2026-04-01 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:41.452286 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:41.453854 | orchestrator | 2026-04-01 01:02:41 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:41.453893 | orchestrator | 2026-04-01 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:44.498593 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:44.502092 | orchestrator | 2026-04-01 01:02:44 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:44.502138 | orchestrator | 2026-04-01 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:47.553254 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:02:47.555722 | orchestrator | 2026-04-01 01:02:47 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED 2026-04-01 01:02:47.555772 | orchestrator | 2026-04-01 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:02:50.606561 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED
2026-04-01 01:02:50.610121 | orchestrator | 2026-04-01 01:02:50 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:50.610237 | orchestrator | 2026-04-01 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:53.650709 | orchestrator | 2026-04-01 01:02:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:53.652423 | orchestrator | 2026-04-01 01:02:53 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state STARTED
2026-04-01 01:02:53.652636 | orchestrator | 2026-04-01 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:56.700965 | orchestrator | 2026-04-01 01:02:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:56.705111 | orchestrator |
2026-04-01 01:02:56.705165 | orchestrator |
2026-04-01 01:02:56.705174 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-01 01:02:56.705181 | orchestrator |
2026-04-01 01:02:56.705188 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-01 01:02:56.705194 | orchestrator | Wednesday 01 April 2026 01:01:27 +0000 (0:00:00.195) 0:00:00.196 *******
2026-04-01 01:02:56.705201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-01 01:02:56.705208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705215 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705221 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 01:02:56.705227 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705234 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-01 01:02:56.705240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-01 01:02:56.705247 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-01 01:02:56.705253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-01 01:02:56.705259 | orchestrator |
2026-04-01 01:02:56.705265 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-01 01:02:56.705271 | orchestrator | Wednesday 01 April 2026 01:01:31 +0000 (0:00:04.055) 0:00:04.251 *******
2026-04-01 01:02:56.705277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-01 01:02:56.705283 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705289 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 01:02:56.705302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705308 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-01 01:02:56.705314 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-01 01:02:56.705338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-01 01:02:56.705345 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-01 01:02:56.705352 | orchestrator |
2026-04-01 01:02:56.705358 | orchestrator | TASK [Create share directory] **************************************************
2026-04-01 01:02:56.705364 | orchestrator | Wednesday 01 April 2026 01:01:35 +0000 (0:00:03.454) 0:00:07.706 *******
2026-04-01 01:02:56.705371 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-01 01:02:56.705377 | orchestrator |
2026-04-01 01:02:56.705383 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-01 01:02:56.705389 | orchestrator | Wednesday 01 April 2026 01:01:36 +0000 (0:00:00.935) 0:00:08.641 *******
2026-04-01 01:02:56.705478 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-01 01:02:56.705487 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705494 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705538 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 01:02:56.705548 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705554 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-01 01:02:56.705561 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-01 01:02:56.705567 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-01 01:02:56.705573 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-01 01:02:56.705580 | orchestrator |
2026-04-01 01:02:56.705586 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-01 01:02:56.705593 | orchestrator | Wednesday 01 April 2026 01:01:49 +0000 (0:00:13.193) 0:00:21.835 *******
2026-04-01 01:02:56.705599 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-01 01:02:56.705606 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-01 01:02:56.705613 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-01 01:02:56.705629 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-01 01:02:56.705648 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-01 01:02:56.705655 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-01 01:02:56.705661 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-01 01:02:56.705668 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-01 01:02:56.705674 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-01 01:02:56.705681 | orchestrator |
2026-04-01 01:02:56.705687 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-01 01:02:56.705693 | orchestrator | Wednesday 01 April 2026 01:01:52 +0000 (0:00:02.989) 0:00:24.824 *******
2026-04-01 01:02:56.705700 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-01 01:02:56.705706 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705712 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705720 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-01 01:02:56.705726 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-01 01:02:56.705742 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-01 01:02:56.705749 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-01 01:02:56.705755 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-01 01:02:56.705760 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-01 01:02:56.705767 | orchestrator |
2026-04-01 01:02:56.705774 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:02:56.705781 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-01 01:02:56.705789 | orchestrator |
2026-04-01 01:02:56.705796 | orchestrator |
2026-04-01 01:02:56.705803 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:02:56.705810 | orchestrator | Wednesday 01 April 2026 01:01:59 +0000 (0:00:06.753) 0:00:31.578 *******
2026-04-01 01:02:56.705817 | orchestrator | ===============================================================================
2026-04-01 01:02:56.705824 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.19s
2026-04-01 01:02:56.705832 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.75s
2026-04-01 01:02:56.705839 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.06s
2026-04-01 01:02:56.705846 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.45s
2026-04-01 01:02:56.705853 | orchestrator | Check if target directories exist --------------------------------------- 2.99s
2026-04-01
01:02:56.705861 | orchestrator | Create share directory -------------------------------------------------- 0.93s
2026-04-01 01:02:56.705868 | orchestrator |
2026-04-01 01:02:56.705875 | orchestrator |
2026-04-01 01:02:56.705882 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-01 01:02:56.705889 | orchestrator |
2026-04-01 01:02:56.705895 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-01 01:02:56.705901 | orchestrator | Wednesday 01 April 2026 01:02:02 +0000 (0:00:00.264) 0:00:00.264 *******
2026-04-01 01:02:56.705907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-01 01:02:56.705914 | orchestrator |
2026-04-01 01:02:56.705920 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-01 01:02:56.705927 | orchestrator | Wednesday 01 April 2026 01:02:02 +0000 (0:00:00.204) 0:00:00.469 *******
2026-04-01 01:02:56.705934 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-01 01:02:56.705941 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-01 01:02:56.705948 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-01 01:02:56.705955 | orchestrator |
2026-04-01 01:02:56.705962 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-01 01:02:56.705970 | orchestrator | Wednesday 01 April 2026 01:02:04 +0000 (0:00:01.517) 0:00:01.987 *******
2026-04-01 01:02:56.705977 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-01 01:02:56.705983 | orchestrator |
2026-04-01 01:02:56.705990 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-01 01:02:56.705996 | orchestrator | Wednesday 01 April 2026 01:02:05 +0000 (0:00:01.114) 0:00:03.101 *******
2026-04-01 01:02:56.706003 | orchestrator | changed: [testbed-manager]
2026-04-01 01:02:56.706010 | orchestrator |
2026-04-01 01:02:56.706049 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-01 01:02:56.706056 | orchestrator | Wednesday 01 April 2026 01:02:06 +0000 (0:00:00.821) 0:00:03.923 *******
2026-04-01 01:02:56.706063 | orchestrator | changed: [testbed-manager]
2026-04-01 01:02:56.706069 | orchestrator |
2026-04-01 01:02:56.706076 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-01 01:02:56.706088 | orchestrator | Wednesday 01 April 2026 01:02:07 +0000 (0:00:00.895) 0:00:04.818 *******
2026-04-01 01:02:56.706099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-01 01:02:56.706106 | orchestrator | ok: [testbed-manager]
2026-04-01 01:02:56.706113 | orchestrator |
2026-04-01 01:02:56.706120 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-01 01:02:56.706135 | orchestrator | Wednesday 01 April 2026 01:02:47 +0000 (0:00:40.285) 0:00:45.104 *******
2026-04-01 01:02:56.706143 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-01 01:02:56.706150 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-01 01:02:56.706157 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-01 01:02:56.706163 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-01 01:02:56.706170 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-01 01:02:56.706178 | orchestrator |
2026-04-01 01:02:56.706186 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-01 01:02:56.706193 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:03.807) 0:00:48.912 *******
2026-04-01 01:02:56.706200 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-01 01:02:56.706207 | orchestrator |
2026-04-01 01:02:56.706215 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-01 01:02:56.706222 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.512) 0:00:49.425 *******
2026-04-01 01:02:56.706228 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:02:56.706235 | orchestrator |
2026-04-01 01:02:56.706241 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-01 01:02:56.706247 | orchestrator | Wednesday 01 April 2026 01:02:51 +0000 (0:00:00.105) 0:00:49.531 *******
2026-04-01 01:02:56.706254 | orchestrator | skipping: [testbed-manager]
2026-04-01 01:02:56.706261 | orchestrator |
2026-04-01 01:02:56.706268 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-01 01:02:56.706275 | orchestrator | Wednesday 01 April 2026 01:02:52 +0000 (0:00:00.329) 0:00:49.860 *******
2026-04-01 01:02:56.706282 | orchestrator | changed: [testbed-manager]
2026-04-01 01:02:56.706289 | orchestrator |
2026-04-01 01:02:56.706296 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-01 01:02:56.706303 | orchestrator | Wednesday 01 April 2026 01:02:53 +0000 (0:00:01.202) 0:00:51.063 *******
2026-04-01 01:02:56.706310 | orchestrator | changed: [testbed-manager]
2026-04-01 01:02:56.706317 | orchestrator |
2026-04-01 01:02:56.706323 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-01 01:02:56.706329 | orchestrator | Wednesday 01 April 2026 01:02:53 +0000 (0:00:00.615) 0:00:51.679 *******
2026-04-01 01:02:56.706336 | orchestrator | changed: [testbed-manager]
2026-04-01 01:02:56.706344 | orchestrator |
2026-04-01 01:02:56.706351 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-01 01:02:56.706358 | orchestrator | Wednesday 01 April 2026 01:02:54 +0000 (0:00:00.535) 0:00:52.214 *******
2026-04-01 01:02:56.706365 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-01 01:02:56.706373 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-01 01:02:56.706380 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-01 01:02:56.706387 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-01 01:02:56.706394 | orchestrator |
2026-04-01 01:02:56.706401 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:02:56.706408 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-01 01:02:56.706414 | orchestrator |
2026-04-01 01:02:56.706421 | orchestrator |
2026-04-01 01:02:56.706429 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:02:56.706436 | orchestrator | Wednesday 01 April 2026 01:02:55 +0000 (0:00:01.307) 0:00:53.521 *******
2026-04-01 01:02:56.706448 | orchestrator | ===============================================================================
2026-04-01 01:02:56.706455 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.29s
2026-04-01 01:02:56.706463 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.81s
2026-04-01 01:02:56.706470 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.52s
2026-04-01 01:02:56.706477 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.31s
2026-04-01 01:02:56.706483 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.20s
2026-04-01 01:02:56.706489 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s
2026-04-01 01:02:56.706496 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2026-04-01 01:02:56.706517 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.82s
2026-04-01 01:02:56.706525 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.61s
2026-04-01 01:02:56.706532 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.54s
2026-04-01 01:02:56.706539 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-04-01 01:02:56.706546 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s
2026-04-01 01:02:56.706554 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2026-04-01 01:02:56.706560 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2026-04-01 01:02:56.706567 | orchestrator | 2026-04-01 01:02:56 | INFO  | Task 89a1e44d-4156-4395-b396-aa3229c63b64 is in state SUCCESS
2026-04-01 01:02:56.706573 | orchestrator | 2026-04-01 01:02:56 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:02:59.813919 | orchestrator | 2026-04-01 01:02:59 | INFO  | Task f5064570-9646-4f52-9e58-c4ad00bb08ac is in state STARTED
2026-04-01 01:02:59.823141 | orchestrator | 2026-04-01 01:02:59 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED
2026-04-01 01:02:59.826272 | orchestrator | 2026-04-01 01:02:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:02:59.827326 | orchestrator | 2026-04-01 01:02:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:02:59.827980 | orchestrator | 2026-04-01 01:02:59 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:03:02.871230 | orchestrator | 2026-04-01
01:03:02 | INFO  | Task f5064570-9646-4f52-9e58-c4ad00bb08ac is in state STARTED
[… 2026-04-01 01:03:02 to 01:03:54: the same check repeats every ~3 s, Tasks f5064570-9646-4f52-9e58-c4ad00bb08ac, dcc3aa40-6d54-4e20-8849-385fb720883d, c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 all in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" …]
2026-04-01 01:03:57.687348 | orchestrator | 2026-04-01 01:03:57 | INFO  | Task
f5064570-9646-4f52-9e58-c4ad00bb08ac is in state STARTED 2026-04-01 01:03:57.687718 | orchestrator | 2026-04-01 01:03:57 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED 2026-04-01 01:03:57.688280 | orchestrator | 2026-04-01 01:03:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:03:57.689143 | orchestrator | 2026-04-01 01:03:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:03:57.689156 | orchestrator | 2026-04-01 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:00.738460 | orchestrator | 2026-04-01 01:04:00 | INFO  | Task f5064570-9646-4f52-9e58-c4ad00bb08ac is in state STARTED 2026-04-01 01:04:00.740119 | orchestrator | 2026-04-01 01:04:00 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED 2026-04-01 01:04:00.740812 | orchestrator | 2026-04-01 01:04:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:00.742162 | orchestrator | 2026-04-01 01:04:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:00.742201 | orchestrator | 2026-04-01 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:03.778149 | orchestrator | 2026-04-01 01:04:03 | INFO  | Task f5064570-9646-4f52-9e58-c4ad00bb08ac is in state STARTED 2026-04-01 01:04:03.779612 | orchestrator | 2026-04-01 01:04:03 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED 2026-04-01 01:04:03.780150 | orchestrator | 2026-04-01 01:04:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:03.780834 | orchestrator | 2026-04-01 01:04:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:03.780864 | orchestrator | 2026-04-01 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:06.824325 | orchestrator | 2026-04-01 01:04:06 | INFO  | Task 
f5064570-9646-4f52-9e58-c4ad00bb08ac is in state SUCCESS 2026-04-01 01:04:06.826160 | orchestrator | 2026-04-01 01:04:06.826200 | orchestrator | 2026-04-01 01:04:06.826207 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:04:06.826215 | orchestrator | 2026-04-01 01:04:06.826224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:04:06.826229 | orchestrator | Wednesday 01 April 2026 01:02:59 +0000 (0:00:00.403) 0:00:00.403 ******* 2026-04-01 01:04:06.826234 | orchestrator | ok: [testbed-manager] 2026-04-01 01:04:06.826240 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:04:06.826284 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:04:06.826290 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:04:06.826295 | orchestrator | ok: [testbed-node-3] 2026-04-01 01:04:06.826300 | orchestrator | ok: [testbed-node-4] 2026-04-01 01:04:06.826305 | orchestrator | ok: [testbed-node-5] 2026-04-01 01:04:06.826310 | orchestrator | 2026-04-01 01:04:06.826353 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:04:06.826360 | orchestrator | Wednesday 01 April 2026 01:03:00 +0000 (0:00:00.960) 0:00:01.364 ******* 2026-04-01 01:04:06.826366 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826370 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826373 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826376 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826380 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826383 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-01 01:04:06.826387 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-01 
01:04:06.826390 | orchestrator | 2026-04-01 01:04:06.826393 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-01 01:04:06.826396 | orchestrator | 2026-04-01 01:04:06.826399 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-01 01:04:06.826403 | orchestrator | Wednesday 01 April 2026 01:03:01 +0000 (0:00:00.915) 0:00:02.280 ******* 2026-04-01 01:04:06.826407 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:04:06.826411 | orchestrator | 2026-04-01 01:04:06.826414 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-01 01:04:06.826417 | orchestrator | Wednesday 01 April 2026 01:03:02 +0000 (0:00:01.093) 0:00:03.374 ******* 2026-04-01 01:04:06.826435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 01:04:06.826451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826471 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826484 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826554 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:06.826558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826594 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826620 | orchestrator | 2026-04-01 01:04:06.826625 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-01 01:04:06.826631 | orchestrator | Wednesday 01 April 2026 01:03:05 +0000 (0:00:03.239) 0:00:06.613 ******* 2026-04-01 01:04:06.826637 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-01 01:04:06.826642 | orchestrator | 2026-04-01 01:04:06.826647 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-01 01:04:06.826652 | orchestrator | Wednesday 01 April 2026 01:03:06 +0000 (0:00:01.452) 0:00:08.066 ******* 2026-04-01 01:04:06.826658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 01:04:06.826874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826914 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.826939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.826948 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.826959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827088 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:06.827135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827139 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.827189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.827221 | orchestrator | 2026-04-01 01:04:06.827225 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-01 01:04:06.827229 | orchestrator | Wednesday 01 April 2026 01:03:12 +0000 (0:00:05.200) 0:00:13.266 ******* 2026-04-01 01:04:06.827235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 01:04:06.827244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827261 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:06.827269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827311 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.827315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827326 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.827332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827336 | orchestrator | skipping: 
[testbed-node-0] 2026-04-01 01:04:06.827339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827453 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.827457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827460 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.827464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827467 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827470 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.827474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827484 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.827487 | orchestrator | 2026-04-01 01:04:06.827491 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-01 
01:04:06.827494 | orchestrator | Wednesday 01 April 2026 01:03:13 +0000 (0:00:01.910) 0:00:15.177 ******* 2026-04-01 01:04:06.827500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 01:04:06.827527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827539 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827802 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.827808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827827 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:06.827856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 
01:04:06.827862 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827866 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.827871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827894 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.827899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827931 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.827935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827945 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.827950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.827963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.827968 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.827972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.827984 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.827989 | orchestrator | 2026-04-01 01:04:06.828006 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-01 01:04:06.828011 | orchestrator | Wednesday 01 April 2026 01:03:16 +0000 (0:00:02.362) 0:00:17.540 ******* 2026-04-01 01:04:06.828016 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 01:04:06.828021 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.828074 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828125 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828147 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:06.828165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828187 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.828197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828222 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.828227 | orchestrator | 2026-04-01 01:04:06.828231 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-01 01:04:06.828237 | orchestrator | Wednesday 01 April 2026 01:03:21 +0000 (0:00:05.607) 0:00:23.147 ******* 2026-04-01 01:04:06.828245 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:04:06.828250 | orchestrator | 2026-04-01 01:04:06.828255 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-01 01:04:06.828260 | orchestrator | Wednesday 01 April 2026 01:03:22 +0000 (0:00:00.831) 0:00:23.978 ******* 2026-04-01 01:04:06.828265 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.828269 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828274 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.828279 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828284 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828289 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828293 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828298 | orchestrator | 2026-04-01 01:04:06.828303 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-01 01:04:06.828308 | orchestrator | Wednesday 01 April 2026 01:03:23 +0000 (0:00:00.690) 0:00:24.669 ******* 2026-04-01 01:04:06.828313 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-04-01 01:04:06.828318 | orchestrator | 2026-04-01 01:04:06.828322 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-01 01:04:06.828327 | orchestrator | Wednesday 01 April 2026 01:03:24 +0000 (0:00:00.680) 0:00:25.350 ******* 2026-04-01 01:04:06.828332 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828343 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828348 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828353 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828358 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:04:06.828363 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828368 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828373 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828378 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828383 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828388 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-01 01:04:06.828393 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828403 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828408 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828413 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828423 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-04-01 01:04:06.828428 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828439 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828449 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828455 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:04:06.828460 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828470 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828481 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828486 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-01 01:04:06.828492 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828554 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-01 01:04:06.828559 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828565 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828570 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-01 01:04:06.828575 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.828581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828586 | orchestrator | node-5/prometheus.yml.d' path due to this access 
issue: 2026-04-01 01:04:06.828594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-01 01:04:06.828600 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-01 01:04:06.828605 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-01 01:04:06.828610 | orchestrator | 2026-04-01 01:04:06.828615 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-01 01:04:06.828634 | orchestrator | Wednesday 01 April 2026 01:03:25 +0000 (0:00:01.507) 0:00:26.857 ******* 2026-04-01 01:04:06.828640 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828646 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828651 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828661 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.828667 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828672 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828677 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828694 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828700 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828705 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828710 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-01 01:04:06.828715 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828720 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-01 01:04:06.828725 | orchestrator | 
2026-04-01 01:04:06.828730 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-01 01:04:06.828736 | orchestrator | Wednesday 01 April 2026 01:03:38 +0000 (0:00:13.127) 0:00:39.985 ******* 2026-04-01 01:04:06.828741 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828746 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828751 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828756 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.828761 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828766 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828770 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828776 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828780 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828785 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828790 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-01 01:04:06.828795 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828806 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-01 01:04:06.828811 | orchestrator | 2026-04-01 01:04:06.828816 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-01 01:04:06.828820 | orchestrator | Wednesday 01 April 2026 01:03:41 +0000 (0:00:03.037) 0:00:43.023 ******* 2026-04-01 01:04:06.828826 | orchestrator | skipping: 
[testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828832 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828837 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828842 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.828848 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828854 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828857 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828861 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828864 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828867 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828870 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-01 01:04:06.828874 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828877 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-01 01:04:06.828880 | orchestrator | 2026-04-01 01:04:06.828883 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-01 01:04:06.828886 | orchestrator | Wednesday 01 April 2026 01:03:43 +0000 (0:00:01.470) 0:00:44.493 ******* 2026-04-01 01:04:06.828890 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:04:06.828893 | orchestrator | 
2026-04-01 01:04:06.828896 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-01 01:04:06.828902 | orchestrator | Wednesday 01 April 2026 01:03:44 +0000 (0:00:00.724) 0:00:45.218 ******* 2026-04-01 01:04:06.828905 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.828909 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828912 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.828915 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828918 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828921 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828929 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828933 | orchestrator | 2026-04-01 01:04:06.828936 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-01 01:04:06.828939 | orchestrator | Wednesday 01 April 2026 01:03:44 +0000 (0:00:00.692) 0:00:45.910 ******* 2026-04-01 01:04:06.828942 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.828945 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.828949 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.828952 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.828955 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:06.828958 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:06.828961 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:04:06.828964 | orchestrator | 2026-04-01 01:04:06.828967 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-01 01:04:06.828971 | orchestrator | Wednesday 01 April 2026 01:03:46 +0000 (0:00:01.974) 0:00:47.884 ******* 2026-04-01 01:04:06.828974 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.828980 | orchestrator | skipping: 
[testbed-manager] 2026-04-01 01:04:06.828983 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.828986 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.828989 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.828993 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.828996 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.828999 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.829002 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.829005 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.829009 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.829012 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.829015 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-01 01:04:06.829018 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.829021 | orchestrator | 2026-04-01 01:04:06.829025 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-01 01:04:06.829028 | orchestrator | Wednesday 01 April 2026 01:03:48 +0000 (0:00:01.400) 0:00:49.285 ******* 2026-04-01 01:04:06.829031 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829035 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.829038 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829041 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.829044 
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829047 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.829050 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829054 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.829057 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829060 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.829063 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-01 01:04:06.829067 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-01 01:04:06.829070 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.829073 | orchestrator | 2026-04-01 01:04:06.829076 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-01 01:04:06.829079 | orchestrator | Wednesday 01 April 2026 01:03:49 +0000 (0:00:01.445) 0:00:50.731 ******* 2026-04-01 01:04:06.829083 | orchestrator | [WARNING]: Skipped 2026-04-01 01:04:06.829086 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-01 01:04:06.829090 | orchestrator | due to this access issue: 2026-04-01 01:04:06.829093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-01 01:04:06.829096 | orchestrator | not a directory 2026-04-01 01:04:06.829099 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-01 01:04:06.829102 | orchestrator | 2026-04-01 01:04:06.829106 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-04-01 01:04:06.829109 | orchestrator | Wednesday 01 April 2026 01:03:50 +0000 (0:00:01.098) 0:00:51.829 ******* 2026-04-01 01:04:06.829112 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.829115 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.829121 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.829124 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.829127 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.829130 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.829133 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.829137 | orchestrator | 2026-04-01 01:04:06.829142 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-01 01:04:06.829145 | orchestrator | Wednesday 01 April 2026 01:03:51 +0000 (0:00:00.627) 0:00:52.457 ******* 2026-04-01 01:04:06.829149 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.829152 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.829155 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.829158 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.829161 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.829166 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.829170 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.829173 | orchestrator | 2026-04-01 01:04:06.829176 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-01 01:04:06.829179 | orchestrator | Wednesday 01 April 2026 01:03:51 +0000 (0:00:00.734) 0:00:53.192 ******* 2026-04-01 01:04:06.829183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829187 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-01 01:04:06.829191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829195 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-01 01:04:06.829226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829265 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:06.829288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829308 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-01 01:04:06.829318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-01 01:04:06.829328 | orchestrator | 2026-04-01 01:04:06.829332 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-01 01:04:06.829336 | orchestrator | Wednesday 01 April 2026 01:03:56 +0000 (0:00:04.065) 0:00:57.257 ******* 2026-04-01 01:04:06.829340 | orchestrator | changed: [testbed-manager] => { 2026-04-01 01:04:06.829344 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829347 | orchestrator | } 2026-04-01 01:04:06.829351 | orchestrator | changed: [testbed-node-0] => { 2026-04-01 01:04:06.829355 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-01 01:04:06.829359 | orchestrator | } 2026-04-01 01:04:06.829363 | orchestrator | changed: [testbed-node-1] => { 2026-04-01 01:04:06.829367 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829371 | orchestrator | } 2026-04-01 01:04:06.829375 | orchestrator | changed: [testbed-node-2] => { 2026-04-01 01:04:06.829381 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829386 | orchestrator | } 2026-04-01 01:04:06.829391 | orchestrator | changed: [testbed-node-3] => { 2026-04-01 01:04:06.829397 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829402 | orchestrator | } 2026-04-01 01:04:06.829407 | orchestrator | changed: [testbed-node-4] => { 2026-04-01 01:04:06.829413 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829418 | orchestrator | } 2026-04-01 01:04:06.829423 | orchestrator | changed: [testbed-node-5] => { 2026-04-01 01:04:06.829429 | orchestrator |  "msg": "Notifying handlers" 2026-04-01 01:04:06.829435 | orchestrator | } 2026-04-01 01:04:06.829441 | orchestrator | 2026-04-01 01:04:06.829447 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-01 01:04:06.829453 | orchestrator | Wednesday 01 April 2026 01:03:56 +0000 (0:00:00.761) 0:00:58.019 ******* 2026-04-01 01:04:06.829462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET 
/-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-01 01:04:06.829467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829480 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:06.829484 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829493 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829527 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:06.829530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829533 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:06.829537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829540 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:06.829544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-01 01:04:06.829563 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:06.829566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829577 | orchestrator | skipping: [testbed-node-3] 2026-04-01 01:04:06.829580 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829594 | orchestrator | skipping: [testbed-node-4] 2026-04-01 01:04:06.829598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-01 01:04:06.829603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-01 01:04:06.829610 | orchestrator | skipping: [testbed-node-5] 2026-04-01 01:04:06.829613 | orchestrator | 2026-04-01 01:04:06.829616 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-01 01:04:06.829620 | orchestrator | Wednesday 01 April 2026 01:03:58 +0000 (0:00:01.729) 0:00:59.748 ******* 2026-04-01 01:04:06.829623 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-01 01:04:06.829626 | orchestrator | skipping: 
[testbed-manager] 2026-04-01 01:04:06.829629 | orchestrator | 2026-04-01 01:04:06.829632 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829635 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:01.012) 0:01:00.761 ******* 2026-04-01 01:04:06.829639 | orchestrator | 2026-04-01 01:04:06.829642 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829645 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:00.060) 0:01:00.821 ******* 2026-04-01 01:04:06.829648 | orchestrator | 2026-04-01 01:04:06.829651 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829654 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:00.192) 0:01:01.013 ******* 2026-04-01 01:04:06.829658 | orchestrator | 2026-04-01 01:04:06.829661 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829664 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:00.059) 0:01:01.073 ******* 2026-04-01 01:04:06.829667 | orchestrator | 2026-04-01 01:04:06.829671 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829674 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:00.059) 0:01:01.132 ******* 2026-04-01 01:04:06.829678 | orchestrator | 2026-04-01 01:04:06.829763 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829791 | orchestrator | Wednesday 01 April 2026 01:03:59 +0000 (0:00:00.056) 0:01:01.189 ******* 2026-04-01 01:04:06.829795 | orchestrator | 2026-04-01 01:04:06.829798 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-01 01:04:06.829802 | orchestrator | Wednesday 01 April 2026 01:04:00 +0000 (0:00:00.060) 
0:01:01.249 ******* 2026-04-01 01:04:06.829805 | orchestrator | 2026-04-01 01:04:06.829812 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-01 01:04:06.829815 | orchestrator | Wednesday 01 April 2026 01:04:00 +0000 (0:00:00.081) 0:01:01.331 ******* 2026-04-01 01:04:06.829828 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_s716miy6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_s716miy6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_s716miy6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_s716miy6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829838 | orchestrator | 2026-04-01 01:04:06.829841 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-01 01:04:06.829844 | orchestrator | Wednesday 01 April 2026 01:04:02 +0000 (0:00:02.154) 0:01:03.485 ******* 2026-04-01 01:04:06.829853 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ky2q_271/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ky2q_271/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ky2q_271/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ky2q_271/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829859 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_zizrwxxp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_zizrwxxp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_zizrwxxp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_zizrwxxp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829870 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_30njxqjm/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_30njxqjm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_30njxqjm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_30njxqjm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829877 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hzvy5jii/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hzvy5jii/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_hzvy5jii/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hzvy5jii/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829888 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_07um8l54/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_07um8l54/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_07um8l54/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_07um8l54/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829896 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ny4guuae/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ny4guuae/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_ny4guuae/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ny4guuae/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-01 01:04:06.829902 | orchestrator | 2026-04-01 01:04:06.829905 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:04:06.829909 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-04-01 01:04:06.829912 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-01 01:04:06.829915 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-01 01:04:06.829918 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-01 01:04:06.829922 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 
ignored=0 2026-04-01 01:04:06.829925 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-01 01:04:06.829928 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-01 01:04:06.829931 | orchestrator | 2026-04-01 01:04:06.829934 | orchestrator | 2026-04-01 01:04:06.829938 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:04:06.829941 | orchestrator | Wednesday 01 April 2026 01:04:05 +0000 (0:00:03.599) 0:01:07.084 ******* 2026-04-01 01:04:06.829944 | orchestrator | =============================================================================== 2026-04-01 01:04:06.829947 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.13s 2026-04-01 01:04:06.829950 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.61s 2026-04-01 01:04:06.829953 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.20s 2026-04-01 01:04:06.829956 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.07s 2026-04-01 01:04:06.829960 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 3.60s 2026-04-01 01:04:06.829963 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.24s 2026-04-01 01:04:06.829966 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.04s 2026-04-01 01:04:06.829969 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.36s 2026-04-01 01:04:06.829975 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.15s 2026-04-01 01:04:06.829978 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.97s 2026-04-01 
01:04:06.829981 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.91s 2026-04-01 01:04:06.829984 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.73s 2026-04-01 01:04:06.829987 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.51s 2026-04-01 01:04:06.829991 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.47s 2026-04-01 01:04:06.830126 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.45s 2026-04-01 01:04:06.830130 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.45s 2026-04-01 01:04:06.830133 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.40s 2026-04-01 01:04:06.830136 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.10s 2026-04-01 01:04:06.830139 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.09s 2026-04-01 01:04:06.830143 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.01s 2026-04-01 01:04:06.830146 | orchestrator | 2026-04-01 01:04:06 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED 2026-04-01 01:04:06.830151 | orchestrator | 2026-04-01 01:04:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:06.830157 | orchestrator | 2026-04-01 01:04:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:06.830160 | orchestrator | 2026-04-01 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:09.883594 | orchestrator | 2026-04-01 01:04:09 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state STARTED 2026-04-01 01:04:09.884724 | orchestrator | 2026-04-01 01:04:09 | INFO  | Task 
dcc3aa40-6d54-4e20-8849-385fb720883d is in state STARTED 2026-04-01 01:04:09.885725 | orchestrator | 2026-04-01 01:04:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:09.887157 | orchestrator | 2026-04-01 01:04:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:09.887236 | orchestrator | 2026-04-01 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:12.930266 | orchestrator | 2026-04-01 01:04:12 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state STARTED 2026-04-01 01:04:12.931635 | orchestrator | 2026-04-01 01:04:12 | INFO  | Task dcc3aa40-6d54-4e20-8849-385fb720883d is in state SUCCESS 2026-04-01 01:04:12.933326 | orchestrator | 2026-04-01 01:04:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:12.935043 | orchestrator | 2026-04-01 01:04:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:12.935153 | orchestrator | 2026-04-01 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:15.982501 | orchestrator | 2026-04-01 01:04:15 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state STARTED 2026-04-01 01:04:15.984551 | orchestrator | 2026-04-01 01:04:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:15.986319 | orchestrator | 2026-04-01 01:04:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:15.986372 | orchestrator | 2026-04-01 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:19.032628 | orchestrator | 2026-04-01 01:04:19 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state STARTED 2026-04-01 01:04:19.034088 | orchestrator | 2026-04-01 01:04:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:19.035888 | orchestrator | 2026-04-01 01:04:19 | INFO  | Task 
26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:19.035922 | orchestrator | 2026-04-01 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:22.078371 | orchestrator | 2026-04-01 01:04:22 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state STARTED 2026-04-01 01:04:22.081881 | orchestrator | 2026-04-01 01:04:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:04:22.083259 | orchestrator | 2026-04-01 01:04:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:04:22.083659 | orchestrator | 2026-04-01 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:04:25.131115 | orchestrator | 2026-04-01 01:04:25 | INFO  | Task e9e6323b-4885-4268-880b-bad2b6ad5b20 is in state SUCCESS 2026-04-01 01:04:25.131994 | orchestrator | 2026-04-01 01:04:25.132040 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-01 01:04:25.132048 | orchestrator | 2.16.14 2026-04-01 01:04:25.132055 | orchestrator | 2026-04-01 01:04:25.132061 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-01 01:04:25.132068 | orchestrator | 2026-04-01 01:04:25.132074 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-01 01:04:25.132081 | orchestrator | Wednesday 01 April 2026 01:03:00 +0000 (0:00:00.231) 0:00:00.231 ******* 2026-04-01 01:04:25.132087 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132094 | orchestrator | 2026-04-01 01:04:25.132099 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-01 01:04:25.132105 | orchestrator | Wednesday 01 April 2026 01:03:02 +0000 (0:00:02.058) 0:00:02.289 ******* 2026-04-01 01:04:25.132112 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132118 | orchestrator | 2026-04-01 01:04:25.132124 | 
orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-01 01:04:25.132131 | orchestrator | Wednesday 01 April 2026 01:03:03 +0000 (0:00:01.192) 0:00:03.482 ******* 2026-04-01 01:04:25.132137 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132143 | orchestrator | 2026-04-01 01:04:25.132150 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-01 01:04:25.132156 | orchestrator | Wednesday 01 April 2026 01:03:04 +0000 (0:00:01.005) 0:00:04.487 ******* 2026-04-01 01:04:25.132162 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132168 | orchestrator | 2026-04-01 01:04:25.132174 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-01 01:04:25.132190 | orchestrator | Wednesday 01 April 2026 01:03:05 +0000 (0:00:01.202) 0:00:05.690 ******* 2026-04-01 01:04:25.132196 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132203 | orchestrator | 2026-04-01 01:04:25.132208 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-01 01:04:25.132214 | orchestrator | Wednesday 01 April 2026 01:03:07 +0000 (0:00:02.254) 0:00:07.945 ******* 2026-04-01 01:04:25.132219 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132225 | orchestrator | 2026-04-01 01:04:25.132231 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-01 01:04:25.132236 | orchestrator | Wednesday 01 April 2026 01:03:09 +0000 (0:00:01.319) 0:00:09.264 ******* 2026-04-01 01:04:25.132243 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132249 | orchestrator | 2026-04-01 01:04:25.132255 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-01 01:04:25.132261 | orchestrator | Wednesday 01 April 2026 01:03:11 +0000 (0:00:02.013) 0:00:11.278 ******* 
2026-04-01 01:04:25.132266 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132273 | orchestrator | 2026-04-01 01:04:25.132279 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-01 01:04:25.132298 | orchestrator | Wednesday 01 April 2026 01:03:12 +0000 (0:00:01.046) 0:00:12.325 ******* 2026-04-01 01:04:25.132305 | orchestrator | changed: [testbed-manager] 2026-04-01 01:04:25.132311 | orchestrator | 2026-04-01 01:04:25.132317 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-01 01:04:25.132322 | orchestrator | Wednesday 01 April 2026 01:03:47 +0000 (0:00:34.936) 0:00:47.262 ******* 2026-04-01 01:04:25.132328 | orchestrator | skipping: [testbed-manager] 2026-04-01 01:04:25.132334 | orchestrator | 2026-04-01 01:04:25.132340 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-01 01:04:25.132346 | orchestrator | 2026-04-01 01:04:25.132352 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-01 01:04:25.132359 | orchestrator | Wednesday 01 April 2026 01:03:47 +0000 (0:00:00.128) 0:00:47.390 ******* 2026-04-01 01:04:25.132364 | orchestrator | changed: [testbed-node-0] 2026-04-01 01:04:25.132370 | orchestrator | 2026-04-01 01:04:25.132377 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-01 01:04:25.132383 | orchestrator | 2026-04-01 01:04:25.132389 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-01 01:04:25.132395 | orchestrator | Wednesday 01 April 2026 01:03:49 +0000 (0:00:01.733) 0:00:49.124 ******* 2026-04-01 01:04:25.132401 | orchestrator | changed: [testbed-node-1] 2026-04-01 01:04:25.132407 | orchestrator | 2026-04-01 01:04:25.132412 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2026-04-01 01:04:25.132417 | orchestrator | 2026-04-01 01:04:25.132426 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-01 01:04:25.132431 | orchestrator | Wednesday 01 April 2026 01:04:00 +0000 (0:00:11.408) 0:01:00.532 ******* 2026-04-01 01:04:25.132436 | orchestrator | changed: [testbed-node-2] 2026-04-01 01:04:25.132442 | orchestrator | 2026-04-01 01:04:25.132449 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-01 01:04:25.132455 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-01 01:04:25.132462 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:04:25.132468 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:04:25.132474 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-01 01:04:25.132481 | orchestrator | 2026-04-01 01:04:25.132486 | orchestrator | 2026-04-01 01:04:25.132492 | orchestrator | 2026-04-01 01:04:25.132498 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-01 01:04:25.132504 | orchestrator | Wednesday 01 April 2026 01:04:11 +0000 (0:00:11.290) 0:01:11.823 ******* 2026-04-01 01:04:25.132509 | orchestrator | =============================================================================== 2026-04-01 01:04:25.132514 | orchestrator | Create admin user ------------------------------------------------------ 34.94s 2026-04-01 01:04:25.132531 | orchestrator | Restart ceph manager service ------------------------------------------- 24.43s 2026-04-01 01:04:25.132537 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 2.25s 2026-04-01 01:04:25.132543 | 
orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.06s 2026-04-01 01:04:25.132550 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.01s 2026-04-01 01:04:25.132556 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.32s 2026-04-01 01:04:25.132562 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s 2026-04-01 01:04:25.132568 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.20s 2026-04-01 01:04:25.132574 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.05s 2026-04-01 01:04:25.132588 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.00s 2026-04-01 01:04:25.132594 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-04-01 01:04:25.132601 | orchestrator | 2026-04-01 01:04:25.132606 | orchestrator | 2026-04-01 01:04:25.132613 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-01 01:04:25.132619 | orchestrator | 2026-04-01 01:04:25.132624 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-01 01:04:25.132630 | orchestrator | Wednesday 01 April 2026 01:04:09 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-04-01 01:04:25.132636 | orchestrator | ok: [testbed-node-0] 2026-04-01 01:04:25.132642 | orchestrator | ok: [testbed-node-1] 2026-04-01 01:04:25.132648 | orchestrator | ok: [testbed-node-2] 2026-04-01 01:04:25.132654 | orchestrator | 2026-04-01 01:04:25.132663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-01 01:04:25.132669 | orchestrator | Wednesday 01 April 2026 01:04:09 +0000 (0:00:00.263) 0:00:00.533 ******* 2026-04-01 01:04:25.132676 | orchestrator | ok: 
[testbed-node-0] => (item=enable_grafana_True) 2026-04-01 01:04:25.132681 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-01 01:04:25.132687 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-01 01:04:25.132693 | orchestrator | 2026-04-01 01:04:25.132699 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-01 01:04:25.132705 | orchestrator | 2026-04-01 01:04:25.132710 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-01 01:04:25.132716 | orchestrator | Wednesday 01 April 2026 01:04:09 +0000 (0:00:00.270) 0:00:00.804 ******* 2026-04-01 01:04:25.132722 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:04:25.132743 | orchestrator | 2026-04-01 01:04:25.132781 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-01 01:04:25.132787 | orchestrator | Wednesday 01 April 2026 01:04:10 +0000 (0:00:00.606) 0:00:01.410 ******* 2026-04-01 01:04:25.132796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132805 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132829 | orchestrator | 2026-04-01 01:04:25.132835 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-01 01:04:25.132841 | orchestrator | Wednesday 01 April 2026 01:04:11 +0000 (0:00:01.039) 0:00:02.450 ******* 2026-04-01 01:04:25.132847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-01 01:04:25.132853 | orchestrator | 2026-04-01 01:04:25.132859 | orchestrator | TASK [grafana 
: include_tasks] ************************************************* 2026-04-01 01:04:25.132864 | orchestrator | Wednesday 01 April 2026 01:04:12 +0000 (0:00:00.892) 0:00:03.342 ******* 2026-04-01 01:04:25.132870 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-01 01:04:25.132877 | orchestrator | 2026-04-01 01:04:25.132882 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-01 01:04:25.132888 | orchestrator | Wednesday 01 April 2026 01:04:12 +0000 (0:00:00.436) 0:00:03.779 ******* 2026-04-01 01:04:25.132896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': 
['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.132915 | orchestrator | 2026-04-01 01:04:25.132920 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-01 01:04:25.132926 | orchestrator | Wednesday 01 April 2026 01:04:14 +0000 (0:00:01.264) 0:00:05.044 ******* 2026-04-01 01:04:25.132936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.132945 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:25.132951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.132958 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:25.132966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.132972 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:25.132978 | orchestrator | 
2026-04-01 01:04:25.132983 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-01 01:04:25.132989 | orchestrator | Wednesday 01 April 2026 01:04:14 +0000 (0:00:00.384) 0:00:05.429 ******* 2026-04-01 01:04:25.132996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.133002 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:25.133007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.133015 | orchestrator | skipping: [testbed-node-1] 
2026-04-01 01:04:25.133023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.133028 | orchestrator | skipping: [testbed-node-2] 2026-04-01 01:04:25.133034 | orchestrator | 2026-04-01 01:04:25.133040 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-01 01:04:25.133046 | orchestrator | Wednesday 01 April 2026 01:04:14 +0000 (0:00:00.538) 0:00:05.967 ******* 2026-04-01 01:04:25.133052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133061 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133073 | orchestrator | 2026-04-01 01:04:25.133079 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-01 01:04:25.133088 | orchestrator | Wednesday 01 April 2026 01:04:15 +0000 (0:00:01.056) 0:00:07.024 ******* 2026-04-01 01:04:25.133094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-01 01:04:25.133117 | orchestrator |
2026-04-01 01:04:25.133123 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-01 01:04:25.133131 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:01.342) 0:00:08.366 *******
2026-04-01 01:04:25.133137 | orchestrator | skipping: [testbed-node-0]
2026-04-01 01:04:25.133143 | orchestrator | skipping: [testbed-node-1]
2026-04-01 01:04:25.133149 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:04:25.133155 | orchestrator |
2026-04-01 01:04:25.133160 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-01 01:04:25.133166 | orchestrator | Wednesday 01 April 2026 01:04:17 +0000 (0:00:00.238) 0:00:08.605 *******
2026-04-01 01:04:25.133172 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:04:25.133178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:04:25.133183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-01 01:04:25.133190 | orchestrator |
2026-04-01 01:04:25.133195 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-01 01:04:25.133201 | orchestrator | Wednesday 01 April 2026 01:04:18 +0000 (0:00:01.054) 0:00:09.659 *******
2026-04-01 01:04:25.133207 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:04:25.133214 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:04:25.133303 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-01 01:04:25.133314 | orchestrator |
2026-04-01 01:04:25.133320 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-01 01:04:25.133326 | orchestrator | Wednesday 01 April 2026 01:04:19 +0000 (0:00:01.034) 0:00:10.694 *******
2026-04-01 01:04:25.133332 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-01 01:04:25.133338 | orchestrator |
2026-04-01 01:04:25.133344 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-01 01:04:25.133350 | orchestrator | Wednesday 01 April 2026 01:04:20 +0000 (0:00:00.773) 0:00:11.389 *******
2026-04-01 01:04:25.133356 | orchestrator | ok: [testbed-node-0]
2026-04-01 01:04:25.133362 | orchestrator | ok: [testbed-node-1]
2026-04-01 01:04:25.133368 | orchestrator | ok: [testbed-node-2]
2026-04-01 01:04:25.133374 | orchestrator |
2026-04-01 01:04:25.133380 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-01 01:04:25.133386 | orchestrator | Wednesday 01 April 2026 01:04:21 +0000 (0:00:01.038) 0:00:12.162 *******
2026-04-01 01:04:25.133392 | orchestrator | changed: [testbed-node-0]
2026-04-01 01:04:25.133398 | orchestrator | changed: [testbed-node-1]
2026-04-01 01:04:25.133404 | orchestrator | changed: [testbed-node-2]
2026-04-01 01:04:25.133410 | orchestrator |
2026-04-01 01:04:25.133415 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-01 01:04:25.133420 | orchestrator | Wednesday 01 April 2026 01:04:22 +0000 (0:00:01.038) 0:00:13.201 *******
2026-04-01 01:04:25.133425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name':
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-01 01:04:25.133446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-01 01:04:25.133460 | orchestrator |
2026-04-01 01:04:25.133467 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-01 01:04:25.133473 | orchestrator | Wednesday 01 April 2026 01:04:23 +0000 (0:00:00.847) 0:00:14.048 *******
2026-04-01 01:04:25.133479 | orchestrator | changed: [testbed-node-0] => {
2026-04-01 01:04:25.133485 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 01:04:25.133491 | orchestrator | }
2026-04-01 01:04:25.133496 | orchestrator | changed: [testbed-node-1] => {
2026-04-01 01:04:25.133502 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 01:04:25.133508 | orchestrator | }
2026-04-01 01:04:25.133515 | orchestrator | changed: [testbed-node-2] => {
2026-04-01 01:04:25.133520 | orchestrator |  "msg": "Notifying handlers"
2026-04-01 01:04:25.133526 | orchestrator | }
2026-04-01 01:04:25.133532 | orchestrator |
2026-04-01 01:04:25.133537 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-01 01:04:25.133543 | orchestrator | Wednesday 01 April 2026 01:04:23 +0000 (0:00:00.284) 0:00:14.333 *******
2026-04-01 01:04:25.133549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port':
'3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.133555 | orchestrator | skipping: [testbed-node-0] 2026-04-01 01:04:25.133561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-01 01:04:25.133568 | orchestrator | skipping: [testbed-node-1] 2026-04-01 01:04:25.133578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}}}})
2026-04-01 01:04:25.133585 | orchestrator | skipping: [testbed-node-2]
2026-04-01 01:04:25.133592 | orchestrator |
2026-04-01 01:04:25.133598 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-01 01:04:25.133604 | orchestrator | Wednesday 01 April 2026 01:04:23 +0000 (0:00:00.673) 0:00:15.007 *******
2026-04-01 01:04:25.133610 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-01 01:04:25.133647 | orchestrator |
2026-04-01 01:04:25.133655 | orchestrator | PLAY RECAP *********************************************************************
2026-04-01 01:04:25.133662 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-01 01:04:25.133668 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-01 01:04:25.133678 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-01 01:04:25.133684 | orchestrator |
2026-04-01 01:04:25.133690 | orchestrator |
2026-04-01 01:04:25.133696 | orchestrator | TASKS RECAP ********************************************************************
2026-04-01 01:04:25.133703 | orchestrator | Wednesday 01 April 2026 01:04:24 +0000 (0:00:00.616) 0:00:15.623 *******
2026-04-01 01:04:25.133709 | orchestrator | ===============================================================================
2026-04-01 01:04:25.133715 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s
2026-04-01 01:04:25.133721 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.26s
2026-04-01 01:04:25.133764 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.06s
2026-04-01 01:04:25.133771 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.05s
2026-04-01 01:04:25.133776 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.04s
2026-04-01 01:04:25.133782 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.04s
2026-04-01 01:04:25.133788 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.03s
2026-04-01 01:04:25.133793 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s
2026-04-01 01:04:25.133799 | orchestrator | service-check-containers : grafana | Check containers ------------------- 0.85s
2026-04-01 01:04:25.133805 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.77s
2026-04-01 01:04:25.133811 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.70s
2026-04-01 01:04:25.133817 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.67s
2026-04-01 01:04:25.133823 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.62s
2026-04-01 01:04:25.133828 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.61s
2026-04-01 01:04:25.133834 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.54s
2026-04-01 01:04:25.133840 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.44s
2026-04-01 01:04:25.133846 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.38s
2026-04-01 01:04:25.133852 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.28s
2026-04-01 01:04:25.133857 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s
2026-04-01 01:04:25.133863 | orchestrator | Group hosts based on Kolla
action --------------------------------------- 0.26s
2026-04-01 01:04:25.133869 | orchestrator | 2026-04-01 01:04:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:25.133874 | orchestrator | 2026-04-01 01:04:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:25.133880 | orchestrator | 2026-04-01 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:28.174775 | orchestrator | 2026-04-01 01:04:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:28.177316 | orchestrator | 2026-04-01 01:04:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:28.177764 | orchestrator | 2026-04-01 01:04:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:31.219877 | orchestrator | 2026-04-01 01:04:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:31.221557 | orchestrator | 2026-04-01 01:04:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:31.221601 | orchestrator | 2026-04-01 01:04:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:34.269429 | orchestrator | 2026-04-01 01:04:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:34.271241 | orchestrator | 2026-04-01 01:04:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:34.271300 | orchestrator | 2026-04-01 01:04:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:37.313064 | orchestrator | 2026-04-01 01:04:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:37.316228 | orchestrator | 2026-04-01 01:04:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:37.316271 | orchestrator | 2026-04-01 01:04:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:40.355967 | orchestrator | 2026-04-01 01:04:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:40.357917 | orchestrator | 2026-04-01 01:04:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:40.358194 | orchestrator | 2026-04-01 01:04:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:43.398804 | orchestrator | 2026-04-01 01:04:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:43.400578 | orchestrator | 2026-04-01 01:04:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:43.400637 | orchestrator | 2026-04-01 01:04:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:46.443592 | orchestrator | 2026-04-01 01:04:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:46.445348 | orchestrator | 2026-04-01 01:04:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:46.445403 | orchestrator | 2026-04-01 01:04:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:49.486769 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:49.488378 | orchestrator | 2026-04-01 01:04:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:49.488430 | orchestrator | 2026-04-01 01:04:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:52.530187 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:52.531627 | orchestrator | 2026-04-01 01:04:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:52.531680 | orchestrator | 2026-04-01 01:04:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:55.578234 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:55.580500 | orchestrator | 2026-04-01 01:04:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:55.580550 | orchestrator | 2026-04-01 01:04:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:04:58.621476 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:04:58.623209 | orchestrator | 2026-04-01 01:04:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:04:58.623283 | orchestrator | 2026-04-01 01:04:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:01.660118 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:01.660903 | orchestrator | 2026-04-01 01:05:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:01.660947 | orchestrator | 2026-04-01 01:05:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:04.712122 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:04.714966 | orchestrator | 2026-04-01 01:05:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:04.715406 | orchestrator | 2026-04-01 01:05:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:07.757145 | orchestrator | 2026-04-01 01:05:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:07.758376 | orchestrator | 2026-04-01 01:05:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:07.758467 | orchestrator | 2026-04-01 01:05:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:10.798212 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:10.800348 | orchestrator | 2026-04-01 01:05:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:10.800401 | orchestrator | 2026-04-01 01:05:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:13.848191 | orchestrator | 2026-04-01 01:05:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:13.849939 | orchestrator | 2026-04-01 01:05:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:13.849980 | orchestrator | 2026-04-01 01:05:13 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:16.899472 | orchestrator | 2026-04-01 01:05:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:16.900717 | orchestrator | 2026-04-01 01:05:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:16.900768 | orchestrator | 2026-04-01 01:05:16 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:19.943159 | orchestrator | 2026-04-01 01:05:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:19.944149 | orchestrator | 2026-04-01 01:05:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:19.944185 | orchestrator | 2026-04-01 01:05:19 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:22.984820 | orchestrator | 2026-04-01 01:05:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:22.985970 | orchestrator | 2026-04-01 01:05:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:22.986006 | orchestrator | 2026-04-01 01:05:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:26.033759 | orchestrator | 2026-04-01 01:05:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:26.034253 | orchestrator | 2026-04-01 01:05:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:26.034290 | orchestrator | 2026-04-01 01:05:26 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:29.070832 | orchestrator | 2026-04-01 01:05:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:29.072491 | orchestrator | 2026-04-01 01:05:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:29.072543 | orchestrator | 2026-04-01 01:05:29 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:32.111107 | orchestrator | 2026-04-01 01:05:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:32.112784 | orchestrator | 2026-04-01 01:05:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:32.112846 | orchestrator | 2026-04-01 01:05:32 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:35.155117 | orchestrator | 2026-04-01 01:05:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:35.159366 | orchestrator | 2026-04-01 01:05:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:35.159416 | orchestrator | 2026-04-01 01:05:35 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:38.212285 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:38.214199 | orchestrator | 2026-04-01 01:05:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:38.214247 | orchestrator | 2026-04-01 01:05:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:41.254548 | orchestrator | 2026-04-01 01:05:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:41.256116 | orchestrator | 2026-04-01 01:05:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:41.256154 | orchestrator | 2026-04-01 01:05:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:44.299784 | orchestrator | 2026-04-01 01:05:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:44.301680 | orchestrator | 2026-04-01 01:05:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:44.301745 | orchestrator | 2026-04-01 01:05:44 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:47.354826 | orchestrator | 2026-04-01 01:05:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:47.356957 | orchestrator | 2026-04-01 01:05:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:47.356994 | orchestrator | 2026-04-01 01:05:47 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:50.401496 | orchestrator | 2026-04-01 01:05:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:50.403871 | orchestrator | 2026-04-01 01:05:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:50.403938 | orchestrator | 2026-04-01 01:05:50 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:53.442517 | orchestrator | 2026-04-01 01:05:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:53.446235 | orchestrator | 2026-04-01 01:05:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:53.446277 | orchestrator | 2026-04-01 01:05:53 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:05:56.489020 | orchestrator | 2026-04-01 01:05:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:05:56.491205 | orchestrator | 2026-04-01 01:05:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:05:56.491262 | orchestrator | 2026-04-01 01:05:56 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 01:05:59.534524 | orchestrator | 2026-04-01 01:05:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:05:59.536175 | orchestrator | 2026-04-01 01:05:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:05:59.536237 | orchestrator | 2026-04-01 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:02.580859 | orchestrator | 2026-04-01 01:06:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:02.583126 | orchestrator | 2026-04-01 01:06:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:02.583228 | orchestrator | 2026-04-01 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:05.625367 | orchestrator | 2026-04-01 01:06:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:05.626475 | orchestrator | 2026-04-01 01:06:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:05.626518 | orchestrator | 2026-04-01 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:08.670252 | orchestrator | 2026-04-01 01:06:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:08.671657 | orchestrator | 2026-04-01 01:06:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:08.671711 | orchestrator | 2026-04-01 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:11.711627 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:11.713297 | orchestrator | 2026-04-01 01:06:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:11.713351 | orchestrator | 2026-04-01 01:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:14.761692 | orchestrator | 2026-04-01 
01:06:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:14.763798 | orchestrator | 2026-04-01 01:06:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:14.763845 | orchestrator | 2026-04-01 01:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:17.812365 | orchestrator | 2026-04-01 01:06:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:17.813967 | orchestrator | 2026-04-01 01:06:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:17.814113 | orchestrator | 2026-04-01 01:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:20.859429 | orchestrator | 2026-04-01 01:06:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:20.863176 | orchestrator | 2026-04-01 01:06:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:20.863244 | orchestrator | 2026-04-01 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:23.906187 | orchestrator | 2026-04-01 01:06:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:23.908380 | orchestrator | 2026-04-01 01:06:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:23.908473 | orchestrator | 2026-04-01 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:26.954600 | orchestrator | 2026-04-01 01:06:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:26.956121 | orchestrator | 2026-04-01 01:06:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:26.956178 | orchestrator | 2026-04-01 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:30.007245 | orchestrator | 2026-04-01 01:06:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:06:30.009650 | orchestrator | 2026-04-01 01:06:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:30.010169 | orchestrator | 2026-04-01 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:33.059781 | orchestrator | 2026-04-01 01:06:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:33.061346 | orchestrator | 2026-04-01 01:06:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:33.061391 | orchestrator | 2026-04-01 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:36.108912 | orchestrator | 2026-04-01 01:06:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:36.111093 | orchestrator | 2026-04-01 01:06:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:36.111148 | orchestrator | 2026-04-01 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:39.157894 | orchestrator | 2026-04-01 01:06:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:39.159508 | orchestrator | 2026-04-01 01:06:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:39.159555 | orchestrator | 2026-04-01 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:42.207197 | orchestrator | 2026-04-01 01:06:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:42.209519 | orchestrator | 2026-04-01 01:06:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:42.209572 | orchestrator | 2026-04-01 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:45.261622 | orchestrator | 2026-04-01 01:06:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:45.263440 | orchestrator | 2026-04-01 01:06:45 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:45.263480 | orchestrator | 2026-04-01 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:48.316482 | orchestrator | 2026-04-01 01:06:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:48.317739 | orchestrator | 2026-04-01 01:06:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:48.317874 | orchestrator | 2026-04-01 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:51.362490 | orchestrator | 2026-04-01 01:06:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:51.364689 | orchestrator | 2026-04-01 01:06:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:51.364806 | orchestrator | 2026-04-01 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:54.411718 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:54.414312 | orchestrator | 2026-04-01 01:06:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:54.414393 | orchestrator | 2026-04-01 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:06:57.455306 | orchestrator | 2026-04-01 01:06:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:06:57.456524 | orchestrator | 2026-04-01 01:06:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:06:57.456594 | orchestrator | 2026-04-01 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:00.502914 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:00.504766 | orchestrator | 2026-04-01 01:07:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:07:00.504873 | orchestrator | 2026-04-01 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:03.547856 | orchestrator | 2026-04-01 01:07:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:03.549878 | orchestrator | 2026-04-01 01:07:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:03.549926 | orchestrator | 2026-04-01 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:06.589582 | orchestrator | 2026-04-01 01:07:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:06.592073 | orchestrator | 2026-04-01 01:07:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:06.592141 | orchestrator | 2026-04-01 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:09.629451 | orchestrator | 2026-04-01 01:07:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:09.630459 | orchestrator | 2026-04-01 01:07:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:09.630516 | orchestrator | 2026-04-01 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:12.672944 | orchestrator | 2026-04-01 01:07:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:12.674287 | orchestrator | 2026-04-01 01:07:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:12.674331 | orchestrator | 2026-04-01 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:15.721839 | orchestrator | 2026-04-01 01:07:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:15.725227 | orchestrator | 2026-04-01 01:07:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:15.725289 | orchestrator | 2026-04-01 01:07:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:07:18.776458 | orchestrator | 2026-04-01 01:07:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:18.778750 | orchestrator | 2026-04-01 01:07:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:18.778923 | orchestrator | 2026-04-01 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:21.820580 | orchestrator | 2026-04-01 01:07:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:21.822310 | orchestrator | 2026-04-01 01:07:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:21.822420 | orchestrator | 2026-04-01 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:24.868504 | orchestrator | 2026-04-01 01:07:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:24.869524 | orchestrator | 2026-04-01 01:07:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:24.869650 | orchestrator | 2026-04-01 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:27.921017 | orchestrator | 2026-04-01 01:07:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:27.923495 | orchestrator | 2026-04-01 01:07:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:27.923583 | orchestrator | 2026-04-01 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:30.966208 | orchestrator | 2026-04-01 01:07:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:30.967245 | orchestrator | 2026-04-01 01:07:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:30.967261 | orchestrator | 2026-04-01 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:34.020844 | orchestrator | 2026-04-01 
01:07:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:34.022231 | orchestrator | 2026-04-01 01:07:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:34.022341 | orchestrator | 2026-04-01 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:37.064458 | orchestrator | 2026-04-01 01:07:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:37.066800 | orchestrator | 2026-04-01 01:07:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:37.066852 | orchestrator | 2026-04-01 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:40.113861 | orchestrator | 2026-04-01 01:07:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:40.116613 | orchestrator | 2026-04-01 01:07:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:40.116701 | orchestrator | 2026-04-01 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:43.158206 | orchestrator | 2026-04-01 01:07:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:43.158692 | orchestrator | 2026-04-01 01:07:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:43.158713 | orchestrator | 2026-04-01 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:46.204660 | orchestrator | 2026-04-01 01:07:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:46.205777 | orchestrator | 2026-04-01 01:07:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:46.205938 | orchestrator | 2026-04-01 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:49.246099 | orchestrator | 2026-04-01 01:07:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:07:49.247003 | orchestrator | 2026-04-01 01:07:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:49.247028 | orchestrator | 2026-04-01 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:52.287329 | orchestrator | 2026-04-01 01:07:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:52.289847 | orchestrator | 2026-04-01 01:07:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:52.289918 | orchestrator | 2026-04-01 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:55.334529 | orchestrator | 2026-04-01 01:07:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:55.336177 | orchestrator | 2026-04-01 01:07:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:55.336331 | orchestrator | 2026-04-01 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:07:58.382937 | orchestrator | 2026-04-01 01:07:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:07:58.383540 | orchestrator | 2026-04-01 01:07:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:07:58.383579 | orchestrator | 2026-04-01 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:01.428566 | orchestrator | 2026-04-01 01:08:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:01.429945 | orchestrator | 2026-04-01 01:08:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:01.430145 | orchestrator | 2026-04-01 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:04.475460 | orchestrator | 2026-04-01 01:08:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:04.477205 | orchestrator | 2026-04-01 01:08:04 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:04.477262 | orchestrator | 2026-04-01 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:07.523913 | orchestrator | 2026-04-01 01:08:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:07.526217 | orchestrator | 2026-04-01 01:08:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:07.526264 | orchestrator | 2026-04-01 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:10.566578 | orchestrator | 2026-04-01 01:08:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:10.568784 | orchestrator | 2026-04-01 01:08:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:10.568905 | orchestrator | 2026-04-01 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:13.619479 | orchestrator | 2026-04-01 01:08:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:13.623780 | orchestrator | 2026-04-01 01:08:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:13.623873 | orchestrator | 2026-04-01 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:16.667279 | orchestrator | 2026-04-01 01:08:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:16.668590 | orchestrator | 2026-04-01 01:08:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:16.668722 | orchestrator | 2026-04-01 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:19.712071 | orchestrator | 2026-04-01 01:08:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:19.713457 | orchestrator | 2026-04-01 01:08:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:08:19.713565 | orchestrator | 2026-04-01 01:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:22.758148 | orchestrator | 2026-04-01 01:08:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:22.760434 | orchestrator | 2026-04-01 01:08:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:22.760487 | orchestrator | 2026-04-01 01:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:25.804950 | orchestrator | 2026-04-01 01:08:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:25.806799 | orchestrator | 2026-04-01 01:08:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:25.806875 | orchestrator | 2026-04-01 01:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:28.843405 | orchestrator | 2026-04-01 01:08:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:28.845873 | orchestrator | 2026-04-01 01:08:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:28.846152 | orchestrator | 2026-04-01 01:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:31.893663 | orchestrator | 2026-04-01 01:08:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:31.896226 | orchestrator | 2026-04-01 01:08:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:31.896292 | orchestrator | 2026-04-01 01:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:34.948004 | orchestrator | 2026-04-01 01:08:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:34.950386 | orchestrator | 2026-04-01 01:08:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:34.950683 | orchestrator | 2026-04-01 01:08:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:08:38.001297 | orchestrator | 2026-04-01 01:08:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:38.002331 | orchestrator | 2026-04-01 01:08:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:38.002391 | orchestrator | 2026-04-01 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:41.047056 | orchestrator | 2026-04-01 01:08:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:41.049070 | orchestrator | 2026-04-01 01:08:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:41.049179 | orchestrator | 2026-04-01 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:44.090650 | orchestrator | 2026-04-01 01:08:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:44.092074 | orchestrator | 2026-04-01 01:08:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:44.092219 | orchestrator | 2026-04-01 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:47.137839 | orchestrator | 2026-04-01 01:08:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:47.139841 | orchestrator | 2026-04-01 01:08:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:47.139901 | orchestrator | 2026-04-01 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:50.184746 | orchestrator | 2026-04-01 01:08:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:50.187582 | orchestrator | 2026-04-01 01:08:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:50.187653 | orchestrator | 2026-04-01 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:53.233472 | orchestrator | 2026-04-01 
01:08:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:53.235130 | orchestrator | 2026-04-01 01:08:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:53.235200 | orchestrator | 2026-04-01 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:56.280134 | orchestrator | 2026-04-01 01:08:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:56.282102 | orchestrator | 2026-04-01 01:08:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:56.282190 | orchestrator | 2026-04-01 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:08:59.324079 | orchestrator | 2026-04-01 01:08:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:08:59.326385 | orchestrator | 2026-04-01 01:08:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:08:59.326466 | orchestrator | 2026-04-01 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:02.372796 | orchestrator | 2026-04-01 01:09:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:02.374955 | orchestrator | 2026-04-01 01:09:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:02.375056 | orchestrator | 2026-04-01 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:05.423445 | orchestrator | 2026-04-01 01:09:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:05.425832 | orchestrator | 2026-04-01 01:09:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:05.425976 | orchestrator | 2026-04-01 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:08.474279 | orchestrator | 2026-04-01 01:09:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:09:08.475990 | orchestrator | 2026-04-01 01:09:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:08.476051 | orchestrator | 2026-04-01 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:11.525529 | orchestrator | 2026-04-01 01:09:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:11.527828 | orchestrator | 2026-04-01 01:09:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:11.528390 | orchestrator | 2026-04-01 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:14.575936 | orchestrator | 2026-04-01 01:09:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:14.577558 | orchestrator | 2026-04-01 01:09:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:14.577627 | orchestrator | 2026-04-01 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:17.620688 | orchestrator | 2026-04-01 01:09:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:17.622220 | orchestrator | 2026-04-01 01:09:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:17.622273 | orchestrator | 2026-04-01 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:20.667467 | orchestrator | 2026-04-01 01:09:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:20.670322 | orchestrator | 2026-04-01 01:09:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:20.670365 | orchestrator | 2026-04-01 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:23.717310 | orchestrator | 2026-04-01 01:09:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:23.718780 | orchestrator | 2026-04-01 01:09:23 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:23.718883 | orchestrator | 2026-04-01 01:09:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:26.760820 | orchestrator | 2026-04-01 01:09:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:26.762695 | orchestrator | 2026-04-01 01:09:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:26.762942 | orchestrator | 2026-04-01 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:29.804069 | orchestrator | 2026-04-01 01:09:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:29.805398 | orchestrator | 2026-04-01 01:09:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:29.805436 | orchestrator | 2026-04-01 01:09:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:32.846517 | orchestrator | 2026-04-01 01:09:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:32.848579 | orchestrator | 2026-04-01 01:09:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:32.848644 | orchestrator | 2026-04-01 01:09:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:35.898532 | orchestrator | 2026-04-01 01:09:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:35.900284 | orchestrator | 2026-04-01 01:09:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:09:35.900352 | orchestrator | 2026-04-01 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:09:38.953912 | orchestrator | 2026-04-01 01:09:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:09:38.956285 | orchestrator | 2026-04-01 01:09:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:09:38.956366 | orchestrator | 2026-04-01 01:09:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:41.995440 | orchestrator | 2026-04-01 01:09:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:41.998144 | orchestrator | 2026-04-01 01:09:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:41.998269 | orchestrator | 2026-04-01 01:09:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:45.047374 | orchestrator | 2026-04-01 01:09:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:45.050247 | orchestrator | 2026-04-01 01:09:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:45.050325 | orchestrator | 2026-04-01 01:09:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:48.091382 | orchestrator | 2026-04-01 01:09:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:48.092758 | orchestrator | 2026-04-01 01:09:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:48.092828 | orchestrator | 2026-04-01 01:09:48 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:51.133328 | orchestrator | 2026-04-01 01:09:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:51.134800 | orchestrator | 2026-04-01 01:09:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:51.134862 | orchestrator | 2026-04-01 01:09:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:54.178509 | orchestrator | 2026-04-01 01:09:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:54.179812 | orchestrator | 2026-04-01 01:09:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:54.179895 | orchestrator | 2026-04-01 01:09:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:09:57.223721 | orchestrator | 2026-04-01 01:09:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:09:57.225694 | orchestrator | 2026-04-01 01:09:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:09:57.225814 | orchestrator | 2026-04-01 01:09:57 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:00.270523 | orchestrator | 2026-04-01 01:10:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:00.274248 | orchestrator | 2026-04-01 01:10:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:00.274446 | orchestrator | 2026-04-01 01:10:00 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:03.324123 | orchestrator | 2026-04-01 01:10:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:03.326523 | orchestrator | 2026-04-01 01:10:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:03.326621 | orchestrator | 2026-04-01 01:10:03 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:06.373402 | orchestrator | 2026-04-01 01:10:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:06.374538 | orchestrator | 2026-04-01 01:10:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:06.374670 | orchestrator | 2026-04-01 01:10:06 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:09.420774 | orchestrator | 2026-04-01 01:10:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:09.422737 | orchestrator | 2026-04-01 01:10:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:09.422792 | orchestrator | 2026-04-01 01:10:09 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:12.470774 | orchestrator | 2026-04-01 01:10:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:12.472419 | orchestrator | 2026-04-01 01:10:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:12.472472 | orchestrator | 2026-04-01 01:10:12 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:15.518177 | orchestrator | 2026-04-01 01:10:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:15.519406 | orchestrator | 2026-04-01 01:10:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:15.519697 | orchestrator | 2026-04-01 01:10:15 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:18.561500 | orchestrator | 2026-04-01 01:10:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:18.562891 | orchestrator | 2026-04-01 01:10:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:18.562958 | orchestrator | 2026-04-01 01:10:18 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:21.612139 | orchestrator | 2026-04-01 01:10:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:21.614288 | orchestrator | 2026-04-01 01:10:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:21.614369 | orchestrator | 2026-04-01 01:10:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:24.661181 | orchestrator | 2026-04-01 01:10:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:24.664426 | orchestrator | 2026-04-01 01:10:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:24.664514 | orchestrator | 2026-04-01 01:10:24 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:27.711009 | orchestrator | 2026-04-01 01:10:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:27.713874 | orchestrator | 2026-04-01 01:10:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:27.713952 | orchestrator | 2026-04-01 01:10:27 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:30.761853 | orchestrator | 2026-04-01 01:10:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:30.763446 | orchestrator | 2026-04-01 01:10:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:30.763499 | orchestrator | 2026-04-01 01:10:30 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:33.809893 | orchestrator | 2026-04-01 01:10:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:33.810727 | orchestrator | 2026-04-01 01:10:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:33.810843 | orchestrator | 2026-04-01 01:10:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:36.858966 | orchestrator | 2026-04-01 01:10:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:36.860060 | orchestrator | 2026-04-01 01:10:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:36.860388 | orchestrator | 2026-04-01 01:10:36 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:39.902851 | orchestrator | 2026-04-01 01:10:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:39.903763 | orchestrator | 2026-04-01 01:10:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:39.903813 | orchestrator | 2026-04-01 01:10:39 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:42.949594 | orchestrator | 2026-04-01 01:10:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:42.950960 | orchestrator | 2026-04-01 01:10:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:42.951020 | orchestrator | 2026-04-01 01:10:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:46.006929 | orchestrator | 2026-04-01 01:10:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:46.010414 | orchestrator | 2026-04-01 01:10:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:46.010513 | orchestrator | 2026-04-01 01:10:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:49.057206 | orchestrator | 2026-04-01 01:10:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:49.057460 | orchestrator | 2026-04-01 01:10:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:49.057768 | orchestrator | 2026-04-01 01:10:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:52.106424 | orchestrator | 2026-04-01 01:10:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:52.108112 | orchestrator | 2026-04-01 01:10:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:52.108201 | orchestrator | 2026-04-01 01:10:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:55.156166 | orchestrator | 2026-04-01 01:10:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:55.157421 | orchestrator | 2026-04-01 01:10:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:55.157485 | orchestrator | 2026-04-01 01:10:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:10:58.203845 | orchestrator | 2026-04-01 01:10:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:10:58.205117 | orchestrator | 2026-04-01 01:10:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:10:58.205162 | orchestrator | 2026-04-01 01:10:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:01.247989 | orchestrator | 2026-04-01 01:11:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:01.249724 | orchestrator | 2026-04-01 01:11:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:01.249768 | orchestrator | 2026-04-01 01:11:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:04.295561 | orchestrator | 2026-04-01 01:11:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:04.296784 | orchestrator | 2026-04-01 01:11:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:04.296862 | orchestrator | 2026-04-01 01:11:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:07.345753 | orchestrator | 2026-04-01 01:11:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:07.349469 | orchestrator | 2026-04-01 01:11:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:07.349939 | orchestrator | 2026-04-01 01:11:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:10.391141 | orchestrator | 2026-04-01 01:11:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:10.393739 | orchestrator | 2026-04-01 01:11:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:10.393800 | orchestrator | 2026-04-01 01:11:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:13.445605 | orchestrator | 2026-04-01 01:11:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:13.447816 | orchestrator | 2026-04-01 01:11:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:13.447883 | orchestrator | 2026-04-01 01:11:13 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:16.501085 | orchestrator | 2026-04-01 01:11:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:16.501924 | orchestrator | 2026-04-01 01:11:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:16.502177 | orchestrator | 2026-04-01 01:11:16 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:19.552520 | orchestrator | 2026-04-01 01:11:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:19.553582 | orchestrator | 2026-04-01 01:11:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:19.554119 | orchestrator | 2026-04-01 01:11:19 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:22.604391 | orchestrator | 2026-04-01 01:11:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:22.606155 | orchestrator | 2026-04-01 01:11:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:22.606251 | orchestrator | 2026-04-01 01:11:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:25.653151 | orchestrator | 2026-04-01 01:11:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:25.655054 | orchestrator | 2026-04-01 01:11:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:25.655099 | orchestrator | 2026-04-01 01:11:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:28.701831 | orchestrator | 2026-04-01 01:11:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:28.703988 | orchestrator | 2026-04-01 01:11:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:28.704388 | orchestrator | 2026-04-01 01:11:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:31.744389 | orchestrator | 2026-04-01 01:11:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:31.746004 | orchestrator | 2026-04-01 01:11:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:31.746070 | orchestrator | 2026-04-01 01:11:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:34.790055 | orchestrator | 2026-04-01 01:11:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:34.791658 | orchestrator | 2026-04-01 01:11:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:34.791758 | orchestrator | 2026-04-01 01:11:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:37.839740 | orchestrator | 2026-04-01 01:11:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:37.840318 | orchestrator | 2026-04-01 01:11:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:37.840355 | orchestrator | 2026-04-01 01:11:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:40.886789 | orchestrator | 2026-04-01 01:11:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:40.888574 | orchestrator | 2026-04-01 01:11:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:40.888638 | orchestrator | 2026-04-01 01:11:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:43.937528 | orchestrator | 2026-04-01 01:11:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:43.939225 | orchestrator | 2026-04-01 01:11:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:43.939407 | orchestrator | 2026-04-01 01:11:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:46.989153 | orchestrator | 2026-04-01 01:11:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:46.991198 | orchestrator | 2026-04-01 01:11:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:46.991329 | orchestrator | 2026-04-01 01:11:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:50.041870 | orchestrator | 2026-04-01 01:11:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:50.043417 | orchestrator | 2026-04-01 01:11:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:50.043471 | orchestrator | 2026-04-01 01:11:50 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:53.090542 | orchestrator | 2026-04-01 01:11:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:53.091565 | orchestrator | 2026-04-01 01:11:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:53.091629 | orchestrator | 2026-04-01 01:11:53 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:56.143125 | orchestrator | 2026-04-01 01:11:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:56.144332 | orchestrator | 2026-04-01 01:11:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:56.144641 | orchestrator | 2026-04-01 01:11:56 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:11:59.187023 | orchestrator | 2026-04-01 01:11:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:11:59.188765 | orchestrator | 2026-04-01 01:11:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:11:59.189001 | orchestrator | 2026-04-01 01:11:59 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:02.235209 | orchestrator | 2026-04-01 01:12:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:02.237619 | orchestrator | 2026-04-01 01:12:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:02.237694 | orchestrator | 2026-04-01 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:05.280183 | orchestrator | 2026-04-01 01:12:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:05.282570 | orchestrator | 2026-04-01 01:12:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:05.282634 | orchestrator | 2026-04-01 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:08.333037 | orchestrator | 2026-04-01 01:12:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:08.334593 | orchestrator | 2026-04-01 01:12:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:08.334729 | orchestrator | 2026-04-01 01:12:08 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:11.388758 | orchestrator | 2026-04-01 01:12:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:11.390645 | orchestrator | 2026-04-01 01:12:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:11.390685 | orchestrator | 2026-04-01 01:12:11 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:14.435214 | orchestrator | 2026-04-01 01:12:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:14.438806 | orchestrator | 2026-04-01 01:12:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:14.438939 | orchestrator | 2026-04-01 01:12:14 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:17.494675 | orchestrator | 2026-04-01 01:12:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:17.495996 | orchestrator | 2026-04-01 01:12:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:17.496254 | orchestrator | 2026-04-01 01:12:17 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:20.543455 | orchestrator | 2026-04-01 01:12:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:20.547558 | orchestrator | 2026-04-01 01:12:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:20.547624 | orchestrator | 2026-04-01 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:23.594228 | orchestrator | 2026-04-01 01:12:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:23.595042 | orchestrator | 2026-04-01 01:12:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:23.595078 | orchestrator | 2026-04-01 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:26.644160 | orchestrator | 2026-04-01 01:12:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:26.646102 | orchestrator | 2026-04-01 01:12:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:26.646176 | orchestrator | 2026-04-01 01:12:26 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:29.692698 | orchestrator | 2026-04-01 01:12:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:29.694182 | orchestrator | 2026-04-01 01:12:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:29.694223 | orchestrator | 2026-04-01 01:12:29 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:32.737781 | orchestrator | 2026-04-01 01:12:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:32.739070 | orchestrator | 2026-04-01 01:12:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:32.739174 | orchestrator | 2026-04-01 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:35.787043 | orchestrator | 2026-04-01 01:12:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:35.789246 | orchestrator | 2026-04-01 01:12:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:35.789391 | orchestrator | 2026-04-01 01:12:35 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:38.834980 | orchestrator | 2026-04-01 01:12:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:38.836706 | orchestrator | 2026-04-01 01:12:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:38.840611 | orchestrator | 2026-04-01 01:12:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:41.891183 | orchestrator | 2026-04-01 01:12:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:41.894471 | orchestrator | 2026-04-01 01:12:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:41.894538 | orchestrator | 2026-04-01 01:12:41 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:44.939688 | orchestrator | 2026-04-01 01:12:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:44.941336 | orchestrator | 2026-04-01 01:12:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:44.941411 | orchestrator | 2026-04-01 01:12:44 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:47.989052 | orchestrator | 2026-04-01 01:12:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:47.990568 | orchestrator | 2026-04-01 01:12:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:47.990621 | orchestrator | 2026-04-01 01:12:47 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:51.039607 | orchestrator | 2026-04-01 01:12:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:51.040959 | orchestrator | 2026-04-01 01:12:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:51.041018 | orchestrator | 2026-04-01 01:12:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:54.087170 | orchestrator | 2026-04-01 01:12:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:54.087947 | orchestrator | 2026-04-01 01:12:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:54.087992 | orchestrator | 2026-04-01 01:12:54 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:12:57.136379 | orchestrator | 2026-04-01 01:12:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:12:57.138793 | orchestrator | 2026-04-01 01:12:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:12:57.138866 | orchestrator | 2026-04-01 01:12:57 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:00.180821 | orchestrator | 2026-04-01 01:13:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:00.181820 | orchestrator | 2026-04-01 01:13:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:00.181873 | orchestrator | 2026-04-01 01:13:00 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:03.229405 | orchestrator | 2026-04-01 01:13:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:03.231702 | orchestrator | 2026-04-01 01:13:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:03.231760 | orchestrator | 2026-04-01 01:13:03 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:06.279854 | orchestrator | 2026-04-01 01:13:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:06.280934 | orchestrator | 2026-04-01 01:13:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:06.280975 | orchestrator | 2026-04-01 01:13:06 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:09.330425 | orchestrator | 2026-04-01 01:13:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:09.332379 | orchestrator | 2026-04-01 01:13:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:09.332452 | orchestrator | 2026-04-01 01:13:09 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:12.375442 | orchestrator | 2026-04-01 01:13:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:12.376909 | orchestrator | 2026-04-01 01:13:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:12.377386 | orchestrator | 2026-04-01 01:13:12 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:15.422331 | orchestrator | 2026-04-01 01:13:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:15.423366 | orchestrator | 2026-04-01 01:13:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:15.423629 | orchestrator | 2026-04-01 01:13:15 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:18.471127 | orchestrator | 2026-04-01 01:13:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:18.473536 | orchestrator | 2026-04-01 01:13:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:18.473656 | orchestrator | 2026-04-01 01:13:18 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:21.522497 | orchestrator | 2026-04-01 01:13:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:21.525219 | orchestrator | 2026-04-01 01:13:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:21.525325 | orchestrator | 2026-04-01 01:13:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:24.571389 | orchestrator | 2026-04-01 01:13:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:24.573384 | orchestrator | 2026-04-01 01:13:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:24.573511 | orchestrator | 2026-04-01 01:13:24 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:27.620524 | orchestrator | 2026-04-01 01:13:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:27.622560 | orchestrator | 2026-04-01 01:13:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:27.622611 | orchestrator | 2026-04-01 01:13:27 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:30.667833 | orchestrator | 2026-04-01 01:13:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:30.670099 | orchestrator | 2026-04-01 01:13:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:30.670175 | orchestrator | 2026-04-01 01:13:30 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:33.719449 | orchestrator | 2026-04-01 01:13:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:33.720620 | orchestrator | 2026-04-01 01:13:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:33.720670 | orchestrator | 2026-04-01 01:13:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:36.765803 | orchestrator | 2026-04-01 01:13:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:36.767539 | orchestrator | 2026-04-01 01:13:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:36.767605 | orchestrator | 2026-04-01 01:13:36 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:39.809231 | orchestrator | 2026-04-01 01:13:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:39.810142 | orchestrator | 2026-04-01 01:13:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:39.810399 | orchestrator | 2026-04-01 01:13:39 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:42.858570 | orchestrator | 2026-04-01 01:13:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:42.859691 | orchestrator | 2026-04-01 01:13:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:42.859708 | orchestrator | 2026-04-01 01:13:42 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:45.904746 | orchestrator | 2026-04-01 01:13:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:45.906944 | orchestrator | 2026-04-01 01:13:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:45.907022 | orchestrator | 2026-04-01 01:13:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:48.951624 | orchestrator | 2026-04-01 01:13:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:48.952581 | orchestrator | 2026-04-01 01:13:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:48.952634 | orchestrator | 2026-04-01 01:13:48 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:51.995011 | orchestrator | 2026-04-01 01:13:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:51.996692 | orchestrator | 2026-04-01 01:13:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:51.996730 | orchestrator | 2026-04-01 01:13:51 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:55.043000 | orchestrator | 2026-04-01 01:13:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:55.045852 | orchestrator | 2026-04-01 01:13:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:55.045903 | orchestrator | 2026-04-01 01:13:55 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:13:58.093675 | orchestrator | 2026-04-01 01:13:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:13:58.095829 | orchestrator | 2026-04-01 01:13:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:13:58.095876 | orchestrator | 2026-04-01 01:13:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:01.142642 | orchestrator | 2026-04-01 01:14:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:01.144513 | orchestrator | 2026-04-01 01:14:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:01.144582 | orchestrator | 2026-04-01 01:14:01 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:04.185071 | orchestrator | 2026-04-01 01:14:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:04.187788 | orchestrator | 2026-04-01 01:14:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:04.189154 | orchestrator | 2026-04-01 01:14:04 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:07.235384 | orchestrator | 2026-04-01 01:14:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:07.237665 | orchestrator | 2026-04-01 01:14:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:07.237847 | orchestrator | 2026-04-01 01:14:07 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:10.277223 | orchestrator | 2026-04-01 01:14:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:10.279423 | orchestrator | 2026-04-01 01:14:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:10.279540 | orchestrator | 2026-04-01 01:14:10 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:13.327100 | orchestrator | 2026-04-01 01:14:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:13.330674 | orchestrator | 2026-04-01 01:14:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:13.330741 | orchestrator | 2026-04-01 01:14:13 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:16.386665 | orchestrator | 2026-04-01 01:14:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:16.388594 | orchestrator | 2026-04-01 01:14:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:16.388694 | orchestrator | 2026-04-01 01:14:16 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:19.432516 | orchestrator | 2026-04-01 01:14:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:19.435102 | orchestrator | 2026-04-01 01:14:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:19.435204 | orchestrator | 2026-04-01 01:14:19 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:22.478552 | orchestrator | 2026-04-01 01:14:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:22.480247 | orchestrator | 2026-04-01 01:14:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:22.480403 | orchestrator | 2026-04-01 01:14:22 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:25.529067 | orchestrator | 2026-04-01 01:14:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:25.530501 | orchestrator | 2026-04-01 01:14:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:25.530559 | orchestrator | 2026-04-01 01:14:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:28.579828 | orchestrator | 2026-04-01 01:14:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:28.582524 | orchestrator | 2026-04-01 01:14:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:28.582599 | orchestrator | 2026-04-01 01:14:28 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:31.628532 | orchestrator | 2026-04-01 01:14:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:31.630503 | orchestrator | 2026-04-01 01:14:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:31.630561 | orchestrator | 2026-04-01 01:14:31 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:34.678209 | orchestrator | 2026-04-01 01:14:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:34.679606 | orchestrator | 2026-04-01 01:14:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:34.679652 | orchestrator | 2026-04-01 01:14:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:37.728132 | orchestrator | 2026-04-01 01:14:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:37.730897 | orchestrator | 2026-04-01 01:14:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:37.730943 | orchestrator | 2026-04-01 01:14:37 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:40.781477 | orchestrator | 2026-04-01 01:14:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:40.783840 | orchestrator | 2026-04-01 01:14:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:40.784424 | orchestrator | 2026-04-01 01:14:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:43.832042 | orchestrator | 2026-04-01 01:14:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:43.832926 | orchestrator | 2026-04-01 01:14:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:43.832942 | orchestrator | 2026-04-01 01:14:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:46.878493 | orchestrator | 2026-04-01 01:14:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:46.879615 | orchestrator | 2026-04-01 01:14:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:46.879827 | orchestrator | 2026-04-01 01:14:46 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:49.928035 | orchestrator | 2026-04-01 01:14:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:49.930163 | orchestrator | 2026-04-01 01:14:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:49.930357 | orchestrator | 2026-04-01 01:14:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:52.976008 | orchestrator | 2026-04-01 01:14:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:52.977758 | orchestrator | 2026-04-01 01:14:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:52.977821 | orchestrator | 2026-04-01 01:14:52 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:56.023037 | orchestrator | 2026-04-01 01:14:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:56.024168 | orchestrator | 2026-04-01 01:14:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:56.024205 | orchestrator | 2026-04-01 01:14:56 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:14:59.066892 | orchestrator | 2026-04-01 01:14:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:14:59.068563 | orchestrator | 2026-04-01 01:14:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:14:59.068632 | orchestrator | 2026-04-01 01:14:59 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:15:02.109158 | orchestrator | 2026-04-01 01:15:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:15:02.110267 | orchestrator | 2026-04-01 01:15:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:15:02.110399 | orchestrator | 2026-04-01 01:15:02 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:15:05.154707 | orchestrator | 2026-04-01 01:15:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:15:05.156049 | orchestrator | 2026-04-01 01:15:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:15:05.156094 | orchestrator | 2026-04-01 01:15:05 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:15:08.202283 | orchestrator | 2026-04-01 01:15:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:15:08.205458 | orchestrator | 2026-04-01 01:15:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:15:08.205510 | orchestrator | 2026-04-01 01:15:08 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:15:11.250401 | orchestrator | 2026-04-01 01:15:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:15:11.251678 | orchestrator | 2026-04-01 01:15:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:15:11.251819 | orchestrator | 2026-04-01 01:15:11 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 01:15:14.300542 | orchestrator | 2026-04-01 01:15:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:14.302616 | orchestrator | 2026-04-01 01:15:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:14.302665 | orchestrator | 2026-04-01 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:17.349871 | orchestrator | 2026-04-01 01:15:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:17.351702 | orchestrator | 2026-04-01 01:15:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:17.352239 | orchestrator | 2026-04-01 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:20.406662 | orchestrator | 2026-04-01 01:15:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:20.408835 | orchestrator | 2026-04-01 01:15:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:20.408898 | orchestrator | 2026-04-01 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:23.455833 | orchestrator | 2026-04-01 01:15:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:23.457162 | orchestrator | 2026-04-01 01:15:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:23.457276 | orchestrator | 2026-04-01 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:26.499782 | orchestrator | 2026-04-01 01:15:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:26.501409 | orchestrator | 2026-04-01 01:15:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:26.501466 | orchestrator | 2026-04-01 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:29.546044 | orchestrator | 2026-04-01 
01:15:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:29.548917 | orchestrator | 2026-04-01 01:15:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:29.549097 | orchestrator | 2026-04-01 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:32.596721 | orchestrator | 2026-04-01 01:15:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:32.599237 | orchestrator | 2026-04-01 01:15:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:32.599321 | orchestrator | 2026-04-01 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:35.643109 | orchestrator | 2026-04-01 01:15:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:35.645319 | orchestrator | 2026-04-01 01:15:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:35.645551 | orchestrator | 2026-04-01 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:38.691253 | orchestrator | 2026-04-01 01:15:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:38.693279 | orchestrator | 2026-04-01 01:15:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:38.693513 | orchestrator | 2026-04-01 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:41.739481 | orchestrator | 2026-04-01 01:15:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:41.741108 | orchestrator | 2026-04-01 01:15:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:41.741147 | orchestrator | 2026-04-01 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:44.783209 | orchestrator | 2026-04-01 01:15:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:15:44.785644 | orchestrator | 2026-04-01 01:15:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:44.786134 | orchestrator | 2026-04-01 01:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:47.834729 | orchestrator | 2026-04-01 01:15:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:47.836263 | orchestrator | 2026-04-01 01:15:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:47.836496 | orchestrator | 2026-04-01 01:15:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:50.876570 | orchestrator | 2026-04-01 01:15:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:50.877458 | orchestrator | 2026-04-01 01:15:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:50.877509 | orchestrator | 2026-04-01 01:15:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:53.921225 | orchestrator | 2026-04-01 01:15:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:53.923534 | orchestrator | 2026-04-01 01:15:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:53.923593 | orchestrator | 2026-04-01 01:15:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:15:56.964437 | orchestrator | 2026-04-01 01:15:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:15:56.966562 | orchestrator | 2026-04-01 01:15:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:15:56.966759 | orchestrator | 2026-04-01 01:15:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:00.016926 | orchestrator | 2026-04-01 01:16:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:00.019550 | orchestrator | 2026-04-01 01:16:00 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:00.019642 | orchestrator | 2026-04-01 01:16:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:03.066916 | orchestrator | 2026-04-01 01:16:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:03.068446 | orchestrator | 2026-04-01 01:16:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:03.068697 | orchestrator | 2026-04-01 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:06.118890 | orchestrator | 2026-04-01 01:16:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:06.120668 | orchestrator | 2026-04-01 01:16:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:06.120769 | orchestrator | 2026-04-01 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:09.168621 | orchestrator | 2026-04-01 01:16:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:09.170436 | orchestrator | 2026-04-01 01:16:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:09.170525 | orchestrator | 2026-04-01 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:12.212509 | orchestrator | 2026-04-01 01:16:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:12.214076 | orchestrator | 2026-04-01 01:16:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:12.214131 | orchestrator | 2026-04-01 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:15.263701 | orchestrator | 2026-04-01 01:16:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:15.265405 | orchestrator | 2026-04-01 01:16:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:16:15.265481 | orchestrator | 2026-04-01 01:16:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:18.314286 | orchestrator | 2026-04-01 01:16:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:18.315205 | orchestrator | 2026-04-01 01:16:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:18.315267 | orchestrator | 2026-04-01 01:16:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:21.367523 | orchestrator | 2026-04-01 01:16:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:21.369227 | orchestrator | 2026-04-01 01:16:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:21.369299 | orchestrator | 2026-04-01 01:16:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:24.409165 | orchestrator | 2026-04-01 01:16:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:24.415407 | orchestrator | 2026-04-01 01:16:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:24.415490 | orchestrator | 2026-04-01 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:27.459823 | orchestrator | 2026-04-01 01:16:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:27.461818 | orchestrator | 2026-04-01 01:16:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:27.462186 | orchestrator | 2026-04-01 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:30.508779 | orchestrator | 2026-04-01 01:16:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:30.510996 | orchestrator | 2026-04-01 01:16:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:30.511042 | orchestrator | 2026-04-01 01:16:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:16:33.560503 | orchestrator | 2026-04-01 01:16:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:33.562088 | orchestrator | 2026-04-01 01:16:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:33.562133 | orchestrator | 2026-04-01 01:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:36.605292 | orchestrator | 2026-04-01 01:16:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:36.606960 | orchestrator | 2026-04-01 01:16:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:36.607013 | orchestrator | 2026-04-01 01:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:39.651964 | orchestrator | 2026-04-01 01:16:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:39.653805 | orchestrator | 2026-04-01 01:16:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:39.653868 | orchestrator | 2026-04-01 01:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:42.700838 | orchestrator | 2026-04-01 01:16:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:42.702432 | orchestrator | 2026-04-01 01:16:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:42.702550 | orchestrator | 2026-04-01 01:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:45.749272 | orchestrator | 2026-04-01 01:16:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:45.750605 | orchestrator | 2026-04-01 01:16:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:45.750685 | orchestrator | 2026-04-01 01:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:48.802385 | orchestrator | 2026-04-01 
01:16:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:48.804028 | orchestrator | 2026-04-01 01:16:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:48.804109 | orchestrator | 2026-04-01 01:16:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:51.847480 | orchestrator | 2026-04-01 01:16:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:51.848828 | orchestrator | 2026-04-01 01:16:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:51.849133 | orchestrator | 2026-04-01 01:16:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:54.892690 | orchestrator | 2026-04-01 01:16:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:54.893555 | orchestrator | 2026-04-01 01:16:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:54.893585 | orchestrator | 2026-04-01 01:16:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:16:57.938083 | orchestrator | 2026-04-01 01:16:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:16:57.939917 | orchestrator | 2026-04-01 01:16:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:16:57.939964 | orchestrator | 2026-04-01 01:16:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:00.980064 | orchestrator | 2026-04-01 01:17:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:00.982065 | orchestrator | 2026-04-01 01:17:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:00.982120 | orchestrator | 2026-04-01 01:17:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:04.032820 | orchestrator | 2026-04-01 01:17:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:17:04.034431 | orchestrator | 2026-04-01 01:17:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:04.034483 | orchestrator | 2026-04-01 01:17:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:07.086233 | orchestrator | 2026-04-01 01:17:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:07.088499 | orchestrator | 2026-04-01 01:17:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:07.088536 | orchestrator | 2026-04-01 01:17:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:10.133881 | orchestrator | 2026-04-01 01:17:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:10.135720 | orchestrator | 2026-04-01 01:17:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:10.135790 | orchestrator | 2026-04-01 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:13.181233 | orchestrator | 2026-04-01 01:17:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:13.182571 | orchestrator | 2026-04-01 01:17:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:13.182623 | orchestrator | 2026-04-01 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:16.235943 | orchestrator | 2026-04-01 01:17:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:16.237883 | orchestrator | 2026-04-01 01:17:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:16.237964 | orchestrator | 2026-04-01 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:19.293951 | orchestrator | 2026-04-01 01:17:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:19.297632 | orchestrator | 2026-04-01 01:17:19 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:19.297757 | orchestrator | 2026-04-01 01:17:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:22.347046 | orchestrator | 2026-04-01 01:17:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:22.349407 | orchestrator | 2026-04-01 01:17:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:22.349474 | orchestrator | 2026-04-01 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:25.403522 | orchestrator | 2026-04-01 01:17:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:25.405042 | orchestrator | 2026-04-01 01:17:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:25.405101 | orchestrator | 2026-04-01 01:17:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:28.449537 | orchestrator | 2026-04-01 01:17:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:28.451908 | orchestrator | 2026-04-01 01:17:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:28.451966 | orchestrator | 2026-04-01 01:17:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:31.492739 | orchestrator | 2026-04-01 01:17:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:31.492886 | orchestrator | 2026-04-01 01:17:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:31.492906 | orchestrator | 2026-04-01 01:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:34.544503 | orchestrator | 2026-04-01 01:17:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:34.546272 | orchestrator | 2026-04-01 01:17:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:17:34.546558 | orchestrator | 2026-04-01 01:17:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:37.589453 | orchestrator | 2026-04-01 01:17:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:37.591161 | orchestrator | 2026-04-01 01:17:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:37.591210 | orchestrator | 2026-04-01 01:17:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:40.636794 | orchestrator | 2026-04-01 01:17:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:40.638760 | orchestrator | 2026-04-01 01:17:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:40.638966 | orchestrator | 2026-04-01 01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:43.684250 | orchestrator | 2026-04-01 01:17:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:43.686069 | orchestrator | 2026-04-01 01:17:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:43.686170 | orchestrator | 2026-04-01 01:17:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:46.732829 | orchestrator | 2026-04-01 01:17:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:46.735171 | orchestrator | 2026-04-01 01:17:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:46.735262 | orchestrator | 2026-04-01 01:17:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:49.787818 | orchestrator | 2026-04-01 01:17:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:49.789519 | orchestrator | 2026-04-01 01:17:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:49.789583 | orchestrator | 2026-04-01 01:17:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:17:52.837580 | orchestrator | 2026-04-01 01:17:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:52.839005 | orchestrator | 2026-04-01 01:17:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:52.839091 | orchestrator | 2026-04-01 01:17:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:55.887320 | orchestrator | 2026-04-01 01:17:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:55.889253 | orchestrator | 2026-04-01 01:17:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:55.889392 | orchestrator | 2026-04-01 01:17:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:17:58.935671 | orchestrator | 2026-04-01 01:17:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:17:58.937408 | orchestrator | 2026-04-01 01:17:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:17:58.937546 | orchestrator | 2026-04-01 01:17:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:01.980700 | orchestrator | 2026-04-01 01:18:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:01.982458 | orchestrator | 2026-04-01 01:18:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:01.983714 | orchestrator | 2026-04-01 01:18:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:05.035772 | orchestrator | 2026-04-01 01:18:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:05.037712 | orchestrator | 2026-04-01 01:18:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:05.038317 | orchestrator | 2026-04-01 01:18:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:08.083797 | orchestrator | 2026-04-01 
01:18:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:08.084985 | orchestrator | 2026-04-01 01:18:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:08.085047 | orchestrator | 2026-04-01 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:11.131007 | orchestrator | 2026-04-01 01:18:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:11.133522 | orchestrator | 2026-04-01 01:18:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:11.133611 | orchestrator | 2026-04-01 01:18:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:14.179327 | orchestrator | 2026-04-01 01:18:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:14.180805 | orchestrator | 2026-04-01 01:18:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:14.180848 | orchestrator | 2026-04-01 01:18:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:17.227912 | orchestrator | 2026-04-01 01:18:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:17.229164 | orchestrator | 2026-04-01 01:18:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:17.229244 | orchestrator | 2026-04-01 01:18:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:20.270671 | orchestrator | 2026-04-01 01:18:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:20.271579 | orchestrator | 2026-04-01 01:18:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:20.271611 | orchestrator | 2026-04-01 01:18:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:23.316501 | orchestrator | 2026-04-01 01:18:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:18:23.318261 | orchestrator | 2026-04-01 01:18:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:23.318313 | orchestrator | 2026-04-01 01:18:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:26.364343 | orchestrator | 2026-04-01 01:18:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:26.366137 | orchestrator | 2026-04-01 01:18:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:26.366215 | orchestrator | 2026-04-01 01:18:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:29.409556 | orchestrator | 2026-04-01 01:18:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:29.411075 | orchestrator | 2026-04-01 01:18:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:29.411127 | orchestrator | 2026-04-01 01:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:32.457854 | orchestrator | 2026-04-01 01:18:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:32.459640 | orchestrator | 2026-04-01 01:18:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:32.459683 | orchestrator | 2026-04-01 01:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:35.509163 | orchestrator | 2026-04-01 01:18:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:35.510579 | orchestrator | 2026-04-01 01:18:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:35.510630 | orchestrator | 2026-04-01 01:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:38.553947 | orchestrator | 2026-04-01 01:18:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:38.555920 | orchestrator | 2026-04-01 01:18:38 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:38.555990 | orchestrator | 2026-04-01 01:18:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:41.612860 | orchestrator | 2026-04-01 01:18:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:41.615150 | orchestrator | 2026-04-01 01:18:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:41.615206 | orchestrator | 2026-04-01 01:18:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:44.669729 | orchestrator | 2026-04-01 01:18:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:44.670624 | orchestrator | 2026-04-01 01:18:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:44.670684 | orchestrator | 2026-04-01 01:18:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:47.718584 | orchestrator | 2026-04-01 01:18:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:47.720955 | orchestrator | 2026-04-01 01:18:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:47.721002 | orchestrator | 2026-04-01 01:18:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:50.770794 | orchestrator | 2026-04-01 01:18:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:50.772300 | orchestrator | 2026-04-01 01:18:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:18:50.772395 | orchestrator | 2026-04-01 01:18:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:18:53.821049 | orchestrator | 2026-04-01 01:18:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:18:53.823211 | orchestrator | 2026-04-01 01:18:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:18:53.823410 | orchestrator | 2026-04-01 01:18:53 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:18:56.871082 | orchestrator | 2026-04-01 01:18:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:18:56.872968 | orchestrator | 2026-04-01 01:18:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:18:56.873034 | orchestrator | 2026-04-01 01:18:56 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:18:59 to 01:21:32; both tasks remain in state STARTED ...]
2026-04-01 01:21:32.368594 | orchestrator | 2026-04-01 01:21:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:23:32.481832 | orchestrator | 2026-04-01 01:23:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
[... identical status checks repeated every ~3 seconds from 01:23:32 to 01:25:55; both tasks remain in state STARTED ...]
2026-04-01 01:25:55.848059 | orchestrator | 2026-04-01 01:25:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:25:55.850332 | orchestrator | 2026-04-01 01:25:55 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:25:55.850517 | orchestrator | 2026-04-01 01:25:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:25:58.896049 | orchestrator | 2026-04-01 01:25:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:25:58.898429 | orchestrator | 2026-04-01 01:25:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:25:58.898500 | orchestrator | 2026-04-01 01:25:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:01.948995 | orchestrator | 2026-04-01 01:26:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:01.951126 | orchestrator | 2026-04-01 01:26:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:01.951207 | orchestrator | 2026-04-01 01:26:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:04.999447 | orchestrator | 2026-04-01 01:26:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:05.002094 | orchestrator | 2026-04-01 01:26:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:05.002153 | orchestrator | 2026-04-01 01:26:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:08.051145 | orchestrator | 2026-04-01 01:26:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:08.051875 | orchestrator | 2026-04-01 01:26:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:08.051931 | orchestrator | 2026-04-01 01:26:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:11.088402 | orchestrator | 2026-04-01 01:26:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:11.088871 | orchestrator | 2026-04-01 01:26:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:26:11.088895 | orchestrator | 2026-04-01 01:26:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:14.131622 | orchestrator | 2026-04-01 01:26:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:14.133130 | orchestrator | 2026-04-01 01:26:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:14.133318 | orchestrator | 2026-04-01 01:26:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:17.181313 | orchestrator | 2026-04-01 01:26:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:17.182685 | orchestrator | 2026-04-01 01:26:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:17.182744 | orchestrator | 2026-04-01 01:26:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:20.230519 | orchestrator | 2026-04-01 01:26:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:20.232769 | orchestrator | 2026-04-01 01:26:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:20.232874 | orchestrator | 2026-04-01 01:26:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:23.280199 | orchestrator | 2026-04-01 01:26:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:23.281947 | orchestrator | 2026-04-01 01:26:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:23.282002 | orchestrator | 2026-04-01 01:26:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:26.331582 | orchestrator | 2026-04-01 01:26:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:26.334533 | orchestrator | 2026-04-01 01:26:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:26.334583 | orchestrator | 2026-04-01 01:26:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:26:29.382720 | orchestrator | 2026-04-01 01:26:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:29.384024 | orchestrator | 2026-04-01 01:26:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:29.384149 | orchestrator | 2026-04-01 01:26:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:32.429791 | orchestrator | 2026-04-01 01:26:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:32.431433 | orchestrator | 2026-04-01 01:26:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:32.431493 | orchestrator | 2026-04-01 01:26:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:35.477084 | orchestrator | 2026-04-01 01:26:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:35.478610 | orchestrator | 2026-04-01 01:26:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:35.478654 | orchestrator | 2026-04-01 01:26:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:38.523659 | orchestrator | 2026-04-01 01:26:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:38.525290 | orchestrator | 2026-04-01 01:26:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:38.525349 | orchestrator | 2026-04-01 01:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:41.574186 | orchestrator | 2026-04-01 01:26:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:41.576155 | orchestrator | 2026-04-01 01:26:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:41.576327 | orchestrator | 2026-04-01 01:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:44.620770 | orchestrator | 2026-04-01 
01:26:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:44.620960 | orchestrator | 2026-04-01 01:26:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:44.620980 | orchestrator | 2026-04-01 01:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:47.671088 | orchestrator | 2026-04-01 01:26:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:47.673082 | orchestrator | 2026-04-01 01:26:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:47.673120 | orchestrator | 2026-04-01 01:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:50.715411 | orchestrator | 2026-04-01 01:26:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:50.716532 | orchestrator | 2026-04-01 01:26:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:50.716584 | orchestrator | 2026-04-01 01:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:53.764208 | orchestrator | 2026-04-01 01:26:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:53.765994 | orchestrator | 2026-04-01 01:26:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:53.766081 | orchestrator | 2026-04-01 01:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:56.813191 | orchestrator | 2026-04-01 01:26:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:26:56.815857 | orchestrator | 2026-04-01 01:26:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:56.815927 | orchestrator | 2026-04-01 01:26:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:26:59.858140 | orchestrator | 2026-04-01 01:26:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:26:59.859990 | orchestrator | 2026-04-01 01:26:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:26:59.860117 | orchestrator | 2026-04-01 01:26:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:02.906220 | orchestrator | 2026-04-01 01:27:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:02.908788 | orchestrator | 2026-04-01 01:27:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:02.908841 | orchestrator | 2026-04-01 01:27:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:05.956416 | orchestrator | 2026-04-01 01:27:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:05.957754 | orchestrator | 2026-04-01 01:27:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:05.957890 | orchestrator | 2026-04-01 01:27:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:09.005619 | orchestrator | 2026-04-01 01:27:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:09.007320 | orchestrator | 2026-04-01 01:27:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:09.007405 | orchestrator | 2026-04-01 01:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:12.052339 | orchestrator | 2026-04-01 01:27:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:12.053527 | orchestrator | 2026-04-01 01:27:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:12.053634 | orchestrator | 2026-04-01 01:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:15.098605 | orchestrator | 2026-04-01 01:27:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:15.100014 | orchestrator | 2026-04-01 01:27:15 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:15.100108 | orchestrator | 2026-04-01 01:27:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:18.152637 | orchestrator | 2026-04-01 01:27:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:18.154924 | orchestrator | 2026-04-01 01:27:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:18.154975 | orchestrator | 2026-04-01 01:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:21.202149 | orchestrator | 2026-04-01 01:27:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:21.203925 | orchestrator | 2026-04-01 01:27:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:21.203979 | orchestrator | 2026-04-01 01:27:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:24.251465 | orchestrator | 2026-04-01 01:27:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:24.252992 | orchestrator | 2026-04-01 01:27:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:24.253390 | orchestrator | 2026-04-01 01:27:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:27.295838 | orchestrator | 2026-04-01 01:27:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:27.297216 | orchestrator | 2026-04-01 01:27:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:27.297472 | orchestrator | 2026-04-01 01:27:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:30.346820 | orchestrator | 2026-04-01 01:27:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:30.347904 | orchestrator | 2026-04-01 01:27:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:27:30.347968 | orchestrator | 2026-04-01 01:27:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:33.401487 | orchestrator | 2026-04-01 01:27:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:33.403510 | orchestrator | 2026-04-01 01:27:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:33.403564 | orchestrator | 2026-04-01 01:27:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:36.452433 | orchestrator | 2026-04-01 01:27:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:36.455287 | orchestrator | 2026-04-01 01:27:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:36.455345 | orchestrator | 2026-04-01 01:27:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:39.505956 | orchestrator | 2026-04-01 01:27:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:39.508099 | orchestrator | 2026-04-01 01:27:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:39.508155 | orchestrator | 2026-04-01 01:27:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:42.557190 | orchestrator | 2026-04-01 01:27:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:42.558541 | orchestrator | 2026-04-01 01:27:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:42.558592 | orchestrator | 2026-04-01 01:27:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:45.607139 | orchestrator | 2026-04-01 01:27:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:45.608952 | orchestrator | 2026-04-01 01:27:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:45.645982 | orchestrator | 2026-04-01 01:27:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:27:48.657338 | orchestrator | 2026-04-01 01:27:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:48.659283 | orchestrator | 2026-04-01 01:27:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:48.659624 | orchestrator | 2026-04-01 01:27:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:51.709132 | orchestrator | 2026-04-01 01:27:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:51.710944 | orchestrator | 2026-04-01 01:27:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:51.711005 | orchestrator | 2026-04-01 01:27:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:54.754330 | orchestrator | 2026-04-01 01:27:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:54.756349 | orchestrator | 2026-04-01 01:27:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:54.756399 | orchestrator | 2026-04-01 01:27:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:27:57.801284 | orchestrator | 2026-04-01 01:27:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:27:57.802633 | orchestrator | 2026-04-01 01:27:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:27:57.802678 | orchestrator | 2026-04-01 01:27:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:00.851493 | orchestrator | 2026-04-01 01:28:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:00.853336 | orchestrator | 2026-04-01 01:28:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:00.853379 | orchestrator | 2026-04-01 01:28:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:03.903320 | orchestrator | 2026-04-01 
01:28:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:03.906902 | orchestrator | 2026-04-01 01:28:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:03.906970 | orchestrator | 2026-04-01 01:28:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:06.954006 | orchestrator | 2026-04-01 01:28:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:06.956112 | orchestrator | 2026-04-01 01:28:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:06.956201 | orchestrator | 2026-04-01 01:28:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:10.008940 | orchestrator | 2026-04-01 01:28:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:10.010709 | orchestrator | 2026-04-01 01:28:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:10.010834 | orchestrator | 2026-04-01 01:28:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:13.056152 | orchestrator | 2026-04-01 01:28:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:13.058296 | orchestrator | 2026-04-01 01:28:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:13.058366 | orchestrator | 2026-04-01 01:28:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:16.107904 | orchestrator | 2026-04-01 01:28:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:16.110207 | orchestrator | 2026-04-01 01:28:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:16.110792 | orchestrator | 2026-04-01 01:28:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:19.164477 | orchestrator | 2026-04-01 01:28:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:28:19.166565 | orchestrator | 2026-04-01 01:28:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:19.166871 | orchestrator | 2026-04-01 01:28:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:22.210735 | orchestrator | 2026-04-01 01:28:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:22.212245 | orchestrator | 2026-04-01 01:28:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:22.212313 | orchestrator | 2026-04-01 01:28:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:25.257772 | orchestrator | 2026-04-01 01:28:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:25.259350 | orchestrator | 2026-04-01 01:28:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:25.259384 | orchestrator | 2026-04-01 01:28:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:28.309361 | orchestrator | 2026-04-01 01:28:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:28.311388 | orchestrator | 2026-04-01 01:28:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:28.311440 | orchestrator | 2026-04-01 01:28:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:31.355761 | orchestrator | 2026-04-01 01:28:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:31.358216 | orchestrator | 2026-04-01 01:28:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:31.358317 | orchestrator | 2026-04-01 01:28:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:34.402739 | orchestrator | 2026-04-01 01:28:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:34.404616 | orchestrator | 2026-04-01 01:28:34 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:34.404692 | orchestrator | 2026-04-01 01:28:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:37.449397 | orchestrator | 2026-04-01 01:28:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:37.451656 | orchestrator | 2026-04-01 01:28:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:37.451834 | orchestrator | 2026-04-01 01:28:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:40.496718 | orchestrator | 2026-04-01 01:28:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:40.498132 | orchestrator | 2026-04-01 01:28:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:40.498523 | orchestrator | 2026-04-01 01:28:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:43.547827 | orchestrator | 2026-04-01 01:28:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:43.548983 | orchestrator | 2026-04-01 01:28:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:43.549004 | orchestrator | 2026-04-01 01:28:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:46.599279 | orchestrator | 2026-04-01 01:28:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:46.602133 | orchestrator | 2026-04-01 01:28:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:46.602229 | orchestrator | 2026-04-01 01:28:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:49.656305 | orchestrator | 2026-04-01 01:28:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:49.658220 | orchestrator | 2026-04-01 01:28:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:28:49.658332 | orchestrator | 2026-04-01 01:28:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:52.704057 | orchestrator | 2026-04-01 01:28:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:52.708563 | orchestrator | 2026-04-01 01:28:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:52.708670 | orchestrator | 2026-04-01 01:28:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:55.764183 | orchestrator | 2026-04-01 01:28:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:55.765953 | orchestrator | 2026-04-01 01:28:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:55.766077 | orchestrator | 2026-04-01 01:28:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:28:58.817617 | orchestrator | 2026-04-01 01:28:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:28:58.819287 | orchestrator | 2026-04-01 01:28:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:28:58.819370 | orchestrator | 2026-04-01 01:28:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:01.869215 | orchestrator | 2026-04-01 01:29:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:01.871564 | orchestrator | 2026-04-01 01:29:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:01.871647 | orchestrator | 2026-04-01 01:29:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:04.920608 | orchestrator | 2026-04-01 01:29:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:04.922218 | orchestrator | 2026-04-01 01:29:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:04.922322 | orchestrator | 2026-04-01 01:29:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:29:07.973137 | orchestrator | 2026-04-01 01:29:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:07.975081 | orchestrator | 2026-04-01 01:29:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:07.975161 | orchestrator | 2026-04-01 01:29:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:11.027016 | orchestrator | 2026-04-01 01:29:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:11.028757 | orchestrator | 2026-04-01 01:29:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:11.028817 | orchestrator | 2026-04-01 01:29:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:14.082363 | orchestrator | 2026-04-01 01:29:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:14.084394 | orchestrator | 2026-04-01 01:29:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:14.084440 | orchestrator | 2026-04-01 01:29:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:17.135492 | orchestrator | 2026-04-01 01:29:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:17.137804 | orchestrator | 2026-04-01 01:29:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:17.137884 | orchestrator | 2026-04-01 01:29:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:20.189583 | orchestrator | 2026-04-01 01:29:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:20.192283 | orchestrator | 2026-04-01 01:29:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:20.192344 | orchestrator | 2026-04-01 01:29:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:23.236748 | orchestrator | 2026-04-01 
01:29:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:23.237921 | orchestrator | 2026-04-01 01:29:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:23.238084 | orchestrator | 2026-04-01 01:29:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:26.282418 | orchestrator | 2026-04-01 01:29:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:26.283815 | orchestrator | 2026-04-01 01:29:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:26.283846 | orchestrator | 2026-04-01 01:29:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:29.326952 | orchestrator | 2026-04-01 01:29:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:29.330206 | orchestrator | 2026-04-01 01:29:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:29.330275 | orchestrator | 2026-04-01 01:29:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:32.379322 | orchestrator | 2026-04-01 01:29:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:32.380841 | orchestrator | 2026-04-01 01:29:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:32.381015 | orchestrator | 2026-04-01 01:29:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:35.431035 | orchestrator | 2026-04-01 01:29:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:29:35.432574 | orchestrator | 2026-04-01 01:29:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:29:35.432640 | orchestrator | 2026-04-01 01:29:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:29:38.480978 | orchestrator | 2026-04-01 01:29:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED
2026-04-01 01:29:38.484218 | orchestrator | 2026-04-01 01:29:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:29:38.484295 | orchestrator | 2026-04-01 01:29:38 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:29:41.539894 | orchestrator | 2026-04-01 01:29:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:29:41.540090 | orchestrator | 2026-04-01 01:29:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:29:41.540248 | orchestrator | 2026-04-01 01:29:41 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635, repeated every ~3 seconds from 01:29:44 through 01:35:07, elided ...]
2026-04-01 01:35:10.880364 | orchestrator | 2026-04-01 01:35:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:35:10.881905 | orchestrator | 2026-04-01 01:35:10 | INFO
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:10.881958 | orchestrator | 2026-04-01 01:35:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:13.928003 | orchestrator | 2026-04-01 01:35:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:13.930849 | orchestrator | 2026-04-01 01:35:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:13.930939 | orchestrator | 2026-04-01 01:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:16.974977 | orchestrator | 2026-04-01 01:35:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:16.976266 | orchestrator | 2026-04-01 01:35:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:16.976455 | orchestrator | 2026-04-01 01:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:20.022860 | orchestrator | 2026-04-01 01:35:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:20.024734 | orchestrator | 2026-04-01 01:35:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:20.024805 | orchestrator | 2026-04-01 01:35:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:23.071834 | orchestrator | 2026-04-01 01:35:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:23.072836 | orchestrator | 2026-04-01 01:35:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:23.072893 | orchestrator | 2026-04-01 01:35:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:26.118833 | orchestrator | 2026-04-01 01:35:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:26.120075 | orchestrator | 2026-04-01 01:35:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:35:26.120134 | orchestrator | 2026-04-01 01:35:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:29.165385 | orchestrator | 2026-04-01 01:35:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:29.166775 | orchestrator | 2026-04-01 01:35:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:29.166857 | orchestrator | 2026-04-01 01:35:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:32.214342 | orchestrator | 2026-04-01 01:35:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:32.216023 | orchestrator | 2026-04-01 01:35:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:32.216088 | orchestrator | 2026-04-01 01:35:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:35.264710 | orchestrator | 2026-04-01 01:35:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:35.266510 | orchestrator | 2026-04-01 01:35:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:35.266557 | orchestrator | 2026-04-01 01:35:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:38.311285 | orchestrator | 2026-04-01 01:35:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:38.311468 | orchestrator | 2026-04-01 01:35:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:38.311578 | orchestrator | 2026-04-01 01:35:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:41.356634 | orchestrator | 2026-04-01 01:35:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:41.356821 | orchestrator | 2026-04-01 01:35:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:41.356838 | orchestrator | 2026-04-01 01:35:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:35:44.404045 | orchestrator | 2026-04-01 01:35:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:44.406243 | orchestrator | 2026-04-01 01:35:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:44.406662 | orchestrator | 2026-04-01 01:35:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:47.453985 | orchestrator | 2026-04-01 01:35:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:47.456333 | orchestrator | 2026-04-01 01:35:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:47.456381 | orchestrator | 2026-04-01 01:35:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:50.508752 | orchestrator | 2026-04-01 01:35:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:50.509228 | orchestrator | 2026-04-01 01:35:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:50.509295 | orchestrator | 2026-04-01 01:35:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:53.558974 | orchestrator | 2026-04-01 01:35:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:53.559563 | orchestrator | 2026-04-01 01:35:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:53.559705 | orchestrator | 2026-04-01 01:35:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:56.600152 | orchestrator | 2026-04-01 01:35:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:56.601654 | orchestrator | 2026-04-01 01:35:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:56.601776 | orchestrator | 2026-04-01 01:35:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:35:59.643523 | orchestrator | 2026-04-01 
01:35:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:35:59.644514 | orchestrator | 2026-04-01 01:35:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:35:59.644896 | orchestrator | 2026-04-01 01:35:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:02.693318 | orchestrator | 2026-04-01 01:36:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:02.695067 | orchestrator | 2026-04-01 01:36:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:02.695269 | orchestrator | 2026-04-01 01:36:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:05.743537 | orchestrator | 2026-04-01 01:36:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:05.744727 | orchestrator | 2026-04-01 01:36:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:05.744754 | orchestrator | 2026-04-01 01:36:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:08.798503 | orchestrator | 2026-04-01 01:36:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:08.800416 | orchestrator | 2026-04-01 01:36:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:08.800674 | orchestrator | 2026-04-01 01:36:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:11.852917 | orchestrator | 2026-04-01 01:36:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:11.854152 | orchestrator | 2026-04-01 01:36:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:11.854188 | orchestrator | 2026-04-01 01:36:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:14.894108 | orchestrator | 2026-04-01 01:36:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:36:14.963507 | orchestrator | 2026-04-01 01:36:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:14.963577 | orchestrator | 2026-04-01 01:36:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:17.946167 | orchestrator | 2026-04-01 01:36:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:17.950643 | orchestrator | 2026-04-01 01:36:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:17.950816 | orchestrator | 2026-04-01 01:36:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:20.999320 | orchestrator | 2026-04-01 01:36:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:21.000886 | orchestrator | 2026-04-01 01:36:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:21.002222 | orchestrator | 2026-04-01 01:36:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:24.048095 | orchestrator | 2026-04-01 01:36:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:24.049774 | orchestrator | 2026-04-01 01:36:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:24.049835 | orchestrator | 2026-04-01 01:36:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:27.090611 | orchestrator | 2026-04-01 01:36:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:27.092513 | orchestrator | 2026-04-01 01:36:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:27.092625 | orchestrator | 2026-04-01 01:36:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:30.143581 | orchestrator | 2026-04-01 01:36:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:30.145861 | orchestrator | 2026-04-01 01:36:30 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:30.146106 | orchestrator | 2026-04-01 01:36:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:33.188493 | orchestrator | 2026-04-01 01:36:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:33.190622 | orchestrator | 2026-04-01 01:36:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:33.190693 | orchestrator | 2026-04-01 01:36:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:36.240139 | orchestrator | 2026-04-01 01:36:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:36.241607 | orchestrator | 2026-04-01 01:36:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:36.241686 | orchestrator | 2026-04-01 01:36:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:39.287475 | orchestrator | 2026-04-01 01:36:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:39.288178 | orchestrator | 2026-04-01 01:36:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:39.288233 | orchestrator | 2026-04-01 01:36:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:42.335258 | orchestrator | 2026-04-01 01:36:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:42.337542 | orchestrator | 2026-04-01 01:36:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:42.337603 | orchestrator | 2026-04-01 01:36:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:45.381053 | orchestrator | 2026-04-01 01:36:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:45.383276 | orchestrator | 2026-04-01 01:36:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:36:45.383323 | orchestrator | 2026-04-01 01:36:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:48.435073 | orchestrator | 2026-04-01 01:36:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:48.435604 | orchestrator | 2026-04-01 01:36:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:48.435814 | orchestrator | 2026-04-01 01:36:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:51.484108 | orchestrator | 2026-04-01 01:36:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:51.486337 | orchestrator | 2026-04-01 01:36:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:51.486476 | orchestrator | 2026-04-01 01:36:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:54.530413 | orchestrator | 2026-04-01 01:36:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:54.532761 | orchestrator | 2026-04-01 01:36:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:54.532836 | orchestrator | 2026-04-01 01:36:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:36:57.578600 | orchestrator | 2026-04-01 01:36:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:36:57.580205 | orchestrator | 2026-04-01 01:36:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:36:57.580313 | orchestrator | 2026-04-01 01:36:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:00.628624 | orchestrator | 2026-04-01 01:37:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:01.090270 | orchestrator | 2026-04-01 01:37:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:01.090336 | orchestrator | 2026-04-01 01:37:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:37:03.680982 | orchestrator | 2026-04-01 01:37:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:03.681932 | orchestrator | 2026-04-01 01:37:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:03.682158 | orchestrator | 2026-04-01 01:37:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:06.727623 | orchestrator | 2026-04-01 01:37:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:06.900191 | orchestrator | 2026-04-01 01:37:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:06.900242 | orchestrator | 2026-04-01 01:37:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:09.775978 | orchestrator | 2026-04-01 01:37:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:09.777487 | orchestrator | 2026-04-01 01:37:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:09.777710 | orchestrator | 2026-04-01 01:37:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:12.822649 | orchestrator | 2026-04-01 01:37:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:12.822720 | orchestrator | 2026-04-01 01:37:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:12.822727 | orchestrator | 2026-04-01 01:37:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:15.872758 | orchestrator | 2026-04-01 01:37:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:15.874795 | orchestrator | 2026-04-01 01:37:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:15.874811 | orchestrator | 2026-04-01 01:37:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:18.923017 | orchestrator | 2026-04-01 
01:37:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:18.924348 | orchestrator | 2026-04-01 01:37:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:18.924458 | orchestrator | 2026-04-01 01:37:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:21.970723 | orchestrator | 2026-04-01 01:37:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:21.971462 | orchestrator | 2026-04-01 01:37:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:21.972450 | orchestrator | 2026-04-01 01:37:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:25.015241 | orchestrator | 2026-04-01 01:37:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:25.016855 | orchestrator | 2026-04-01 01:37:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:25.016885 | orchestrator | 2026-04-01 01:37:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:28.063908 | orchestrator | 2026-04-01 01:37:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:28.065270 | orchestrator | 2026-04-01 01:37:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:28.065288 | orchestrator | 2026-04-01 01:37:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:31.108395 | orchestrator | 2026-04-01 01:37:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:31.109417 | orchestrator | 2026-04-01 01:37:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:31.109466 | orchestrator | 2026-04-01 01:37:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:34.150965 | orchestrator | 2026-04-01 01:37:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:37:34.153277 | orchestrator | 2026-04-01 01:37:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:34.153356 | orchestrator | 2026-04-01 01:37:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:37.196352 | orchestrator | 2026-04-01 01:37:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:37.198111 | orchestrator | 2026-04-01 01:37:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:37.198140 | orchestrator | 2026-04-01 01:37:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:40.249213 | orchestrator | 2026-04-01 01:37:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:40.249323 | orchestrator | 2026-04-01 01:37:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:40.249375 | orchestrator | 2026-04-01 01:37:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:43.287940 | orchestrator | 2026-04-01 01:37:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:43.289608 | orchestrator | 2026-04-01 01:37:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:43.289695 | orchestrator | 2026-04-01 01:37:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:46.334904 | orchestrator | 2026-04-01 01:37:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:46.335872 | orchestrator | 2026-04-01 01:37:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:46.335916 | orchestrator | 2026-04-01 01:37:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:49.386546 | orchestrator | 2026-04-01 01:37:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:49.386648 | orchestrator | 2026-04-01 01:37:49 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:49.386665 | orchestrator | 2026-04-01 01:37:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:52.434249 | orchestrator | 2026-04-01 01:37:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:52.436089 | orchestrator | 2026-04-01 01:37:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:52.436174 | orchestrator | 2026-04-01 01:37:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:55.494892 | orchestrator | 2026-04-01 01:37:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:55.495053 | orchestrator | 2026-04-01 01:37:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:55.495072 | orchestrator | 2026-04-01 01:37:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:37:58.539906 | orchestrator | 2026-04-01 01:37:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:37:58.540093 | orchestrator | 2026-04-01 01:37:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:37:58.540147 | orchestrator | 2026-04-01 01:37:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:01.585451 | orchestrator | 2026-04-01 01:38:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:01.588669 | orchestrator | 2026-04-01 01:38:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:01.588742 | orchestrator | 2026-04-01 01:38:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:04.633013 | orchestrator | 2026-04-01 01:38:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:04.633347 | orchestrator | 2026-04-01 01:38:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:38:04.633382 | orchestrator | 2026-04-01 01:38:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:07.678561 | orchestrator | 2026-04-01 01:38:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:07.681124 | orchestrator | 2026-04-01 01:38:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:07.681166 | orchestrator | 2026-04-01 01:38:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:10.719973 | orchestrator | 2026-04-01 01:38:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:10.722224 | orchestrator | 2026-04-01 01:38:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:10.722312 | orchestrator | 2026-04-01 01:38:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:13.762640 | orchestrator | 2026-04-01 01:38:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:13.764818 | orchestrator | 2026-04-01 01:38:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:13.764952 | orchestrator | 2026-04-01 01:38:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:16.812865 | orchestrator | 2026-04-01 01:38:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:16.815383 | orchestrator | 2026-04-01 01:38:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:16.815840 | orchestrator | 2026-04-01 01:38:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:19.857917 | orchestrator | 2026-04-01 01:38:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:19.859974 | orchestrator | 2026-04-01 01:38:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:19.860118 | orchestrator | 2026-04-01 01:38:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:38:22.908809 | orchestrator | 2026-04-01 01:38:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:22.910701 | orchestrator | 2026-04-01 01:38:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:22.910812 | orchestrator | 2026-04-01 01:38:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:25.957502 | orchestrator | 2026-04-01 01:38:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:25.959475 | orchestrator | 2026-04-01 01:38:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:25.959594 | orchestrator | 2026-04-01 01:38:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:29.003513 | orchestrator | 2026-04-01 01:38:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:29.005188 | orchestrator | 2026-04-01 01:38:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:29.005407 | orchestrator | 2026-04-01 01:38:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:32.066571 | orchestrator | 2026-04-01 01:38:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:32.069383 | orchestrator | 2026-04-01 01:38:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:32.069433 | orchestrator | 2026-04-01 01:38:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:35.118216 | orchestrator | 2026-04-01 01:38:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:35.121071 | orchestrator | 2026-04-01 01:38:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:35.121216 | orchestrator | 2026-04-01 01:38:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:38.166963 | orchestrator | 2026-04-01 
01:38:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:38.167934 | orchestrator | 2026-04-01 01:38:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:38.167980 | orchestrator | 2026-04-01 01:38:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:41.223577 | orchestrator | 2026-04-01 01:38:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:41.225810 | orchestrator | 2026-04-01 01:38:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:41.226122 | orchestrator | 2026-04-01 01:38:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:44.284235 | orchestrator | 2026-04-01 01:38:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:44.285301 | orchestrator | 2026-04-01 01:38:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:44.286264 | orchestrator | 2026-04-01 01:38:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:47.353941 | orchestrator | 2026-04-01 01:38:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:47.354436 | orchestrator | 2026-04-01 01:38:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:47.354474 | orchestrator | 2026-04-01 01:38:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:50.396466 | orchestrator | 2026-04-01 01:38:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:50.399945 | orchestrator | 2026-04-01 01:38:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:50.400029 | orchestrator | 2026-04-01 01:38:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:53.449296 | orchestrator | 2026-04-01 01:38:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:38:53.452088 | orchestrator | 2026-04-01 01:38:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:53.452139 | orchestrator | 2026-04-01 01:38:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:38:56.501263 | orchestrator | 2026-04-01 01:38:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:38:56.504634 | orchestrator | 2026-04-01 01:38:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:38:56.504838 | orchestrator | 2026-04-01 01:38:56 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries for tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 ("is in state STARTED" / "Wait 1 second(s) until the next check") repeat every ~3 seconds from 01:38:59 through 01:44:07 ...]
2026-04-01 01:44:10.644078 | orchestrator | 2026-04-01 01:44:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state
STARTED 2026-04-01 01:44:10.648157 | orchestrator | 2026-04-01 01:44:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:10.648310 | orchestrator | 2026-04-01 01:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:13.700739 | orchestrator | 2026-04-01 01:44:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:13.704067 | orchestrator | 2026-04-01 01:44:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:13.704111 | orchestrator | 2026-04-01 01:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:16.751930 | orchestrator | 2026-04-01 01:44:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:16.754450 | orchestrator | 2026-04-01 01:44:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:16.754537 | orchestrator | 2026-04-01 01:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:19.803723 | orchestrator | 2026-04-01 01:44:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:19.806098 | orchestrator | 2026-04-01 01:44:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:19.806160 | orchestrator | 2026-04-01 01:44:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:22.852490 | orchestrator | 2026-04-01 01:44:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:22.853929 | orchestrator | 2026-04-01 01:44:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:22.854179 | orchestrator | 2026-04-01 01:44:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:25.901513 | orchestrator | 2026-04-01 01:44:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:25.902724 | orchestrator | 2026-04-01 01:44:25 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:25.902794 | orchestrator | 2026-04-01 01:44:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:28.953226 | orchestrator | 2026-04-01 01:44:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:28.955378 | orchestrator | 2026-04-01 01:44:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:28.955480 | orchestrator | 2026-04-01 01:44:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:32.006318 | orchestrator | 2026-04-01 01:44:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:32.009338 | orchestrator | 2026-04-01 01:44:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:32.009393 | orchestrator | 2026-04-01 01:44:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:35.056900 | orchestrator | 2026-04-01 01:44:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:35.058901 | orchestrator | 2026-04-01 01:44:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:35.058949 | orchestrator | 2026-04-01 01:44:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:38.105479 | orchestrator | 2026-04-01 01:44:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:38.107673 | orchestrator | 2026-04-01 01:44:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:38.107976 | orchestrator | 2026-04-01 01:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:41.155764 | orchestrator | 2026-04-01 01:44:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:41.157272 | orchestrator | 2026-04-01 01:44:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:44:41.157329 | orchestrator | 2026-04-01 01:44:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:44.199826 | orchestrator | 2026-04-01 01:44:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:44.201224 | orchestrator | 2026-04-01 01:44:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:44.201474 | orchestrator | 2026-04-01 01:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:47.252334 | orchestrator | 2026-04-01 01:44:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:47.253570 | orchestrator | 2026-04-01 01:44:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:47.253678 | orchestrator | 2026-04-01 01:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:50.298373 | orchestrator | 2026-04-01 01:44:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:50.299800 | orchestrator | 2026-04-01 01:44:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:50.299851 | orchestrator | 2026-04-01 01:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:53.345312 | orchestrator | 2026-04-01 01:44:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:53.346908 | orchestrator | 2026-04-01 01:44:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:53.346977 | orchestrator | 2026-04-01 01:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:44:56.394871 | orchestrator | 2026-04-01 01:44:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:56.395641 | orchestrator | 2026-04-01 01:44:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:56.395720 | orchestrator | 2026-04-01 01:44:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:44:59.444700 | orchestrator | 2026-04-01 01:44:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:44:59.445875 | orchestrator | 2026-04-01 01:44:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:44:59.446078 | orchestrator | 2026-04-01 01:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:02.496121 | orchestrator | 2026-04-01 01:45:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:02.498279 | orchestrator | 2026-04-01 01:45:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:02.498464 | orchestrator | 2026-04-01 01:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:05.543249 | orchestrator | 2026-04-01 01:45:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:05.545092 | orchestrator | 2026-04-01 01:45:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:05.545330 | orchestrator | 2026-04-01 01:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:08.587408 | orchestrator | 2026-04-01 01:45:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:08.589568 | orchestrator | 2026-04-01 01:45:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:08.589614 | orchestrator | 2026-04-01 01:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:11.634335 | orchestrator | 2026-04-01 01:45:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:11.636628 | orchestrator | 2026-04-01 01:45:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:11.636664 | orchestrator | 2026-04-01 01:45:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:14.687209 | orchestrator | 2026-04-01 
01:45:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:14.688523 | orchestrator | 2026-04-01 01:45:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:14.688752 | orchestrator | 2026-04-01 01:45:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:17.740942 | orchestrator | 2026-04-01 01:45:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:17.742771 | orchestrator | 2026-04-01 01:45:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:17.742800 | orchestrator | 2026-04-01 01:45:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:20.790961 | orchestrator | 2026-04-01 01:45:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:20.794353 | orchestrator | 2026-04-01 01:45:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:20.794417 | orchestrator | 2026-04-01 01:45:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:23.843124 | orchestrator | 2026-04-01 01:45:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:23.889355 | orchestrator | 2026-04-01 01:45:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:23.889421 | orchestrator | 2026-04-01 01:45:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:26.893291 | orchestrator | 2026-04-01 01:45:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:26.895366 | orchestrator | 2026-04-01 01:45:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:26.895389 | orchestrator | 2026-04-01 01:45:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:29.947287 | orchestrator | 2026-04-01 01:45:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:45:29.948791 | orchestrator | 2026-04-01 01:45:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:29.948811 | orchestrator | 2026-04-01 01:45:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:32.998416 | orchestrator | 2026-04-01 01:45:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:32.999847 | orchestrator | 2026-04-01 01:45:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:32.999893 | orchestrator | 2026-04-01 01:45:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:36.050818 | orchestrator | 2026-04-01 01:45:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:36.051802 | orchestrator | 2026-04-01 01:45:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:36.052463 | orchestrator | 2026-04-01 01:45:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:39.099985 | orchestrator | 2026-04-01 01:45:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:39.102828 | orchestrator | 2026-04-01 01:45:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:39.102905 | orchestrator | 2026-04-01 01:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:42.147813 | orchestrator | 2026-04-01 01:45:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:42.149046 | orchestrator | 2026-04-01 01:45:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:42.149154 | orchestrator | 2026-04-01 01:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:45.185903 | orchestrator | 2026-04-01 01:45:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:45.187611 | orchestrator | 2026-04-01 01:45:45 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:45.187752 | orchestrator | 2026-04-01 01:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:48.234146 | orchestrator | 2026-04-01 01:45:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:48.236244 | orchestrator | 2026-04-01 01:45:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:48.236333 | orchestrator | 2026-04-01 01:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:51.281430 | orchestrator | 2026-04-01 01:45:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:51.284196 | orchestrator | 2026-04-01 01:45:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:51.284331 | orchestrator | 2026-04-01 01:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:54.327599 | orchestrator | 2026-04-01 01:45:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:54.329463 | orchestrator | 2026-04-01 01:45:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:54.329535 | orchestrator | 2026-04-01 01:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:45:57.376550 | orchestrator | 2026-04-01 01:45:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:45:57.377558 | orchestrator | 2026-04-01 01:45:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:45:57.377626 | orchestrator | 2026-04-01 01:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:00.425678 | orchestrator | 2026-04-01 01:46:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:00.428497 | orchestrator | 2026-04-01 01:46:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:46:00.428582 | orchestrator | 2026-04-01 01:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:03.478758 | orchestrator | 2026-04-01 01:46:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:03.480676 | orchestrator | 2026-04-01 01:46:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:03.480958 | orchestrator | 2026-04-01 01:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:06.526749 | orchestrator | 2026-04-01 01:46:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:06.528147 | orchestrator | 2026-04-01 01:46:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:06.528177 | orchestrator | 2026-04-01 01:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:09.574685 | orchestrator | 2026-04-01 01:46:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:09.576598 | orchestrator | 2026-04-01 01:46:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:09.576648 | orchestrator | 2026-04-01 01:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:12.624978 | orchestrator | 2026-04-01 01:46:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:12.626211 | orchestrator | 2026-04-01 01:46:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:12.626340 | orchestrator | 2026-04-01 01:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:15.669077 | orchestrator | 2026-04-01 01:46:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:15.670149 | orchestrator | 2026-04-01 01:46:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:15.670261 | orchestrator | 2026-04-01 01:46:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:46:18.714208 | orchestrator | 2026-04-01 01:46:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:18.716165 | orchestrator | 2026-04-01 01:46:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:18.716210 | orchestrator | 2026-04-01 01:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:21.763485 | orchestrator | 2026-04-01 01:46:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:21.765104 | orchestrator | 2026-04-01 01:46:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:21.765248 | orchestrator | 2026-04-01 01:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:24.818133 | orchestrator | 2026-04-01 01:46:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:24.820427 | orchestrator | 2026-04-01 01:46:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:24.820482 | orchestrator | 2026-04-01 01:46:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:27.872596 | orchestrator | 2026-04-01 01:46:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:27.874267 | orchestrator | 2026-04-01 01:46:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:27.874410 | orchestrator | 2026-04-01 01:46:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:30.923158 | orchestrator | 2026-04-01 01:46:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:30.924145 | orchestrator | 2026-04-01 01:46:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:30.924188 | orchestrator | 2026-04-01 01:46:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:33.977160 | orchestrator | 2026-04-01 
01:46:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:33.978774 | orchestrator | 2026-04-01 01:46:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:33.978873 | orchestrator | 2026-04-01 01:46:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:37.031594 | orchestrator | 2026-04-01 01:46:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:37.032935 | orchestrator | 2026-04-01 01:46:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:37.033088 | orchestrator | 2026-04-01 01:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:40.080953 | orchestrator | 2026-04-01 01:46:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:40.085551 | orchestrator | 2026-04-01 01:46:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:40.085625 | orchestrator | 2026-04-01 01:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:43.136748 | orchestrator | 2026-04-01 01:46:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:43.141160 | orchestrator | 2026-04-01 01:46:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:43.141292 | orchestrator | 2026-04-01 01:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:46.186780 | orchestrator | 2026-04-01 01:46:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:46.188271 | orchestrator | 2026-04-01 01:46:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:46.188515 | orchestrator | 2026-04-01 01:46:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:49.241861 | orchestrator | 2026-04-01 01:46:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:46:49.246452 | orchestrator | 2026-04-01 01:46:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:49.246527 | orchestrator | 2026-04-01 01:46:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:52.296056 | orchestrator | 2026-04-01 01:46:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:52.298135 | orchestrator | 2026-04-01 01:46:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:52.298180 | orchestrator | 2026-04-01 01:46:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:55.345795 | orchestrator | 2026-04-01 01:46:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:55.347300 | orchestrator | 2026-04-01 01:46:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:55.347328 | orchestrator | 2026-04-01 01:46:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:46:58.397557 | orchestrator | 2026-04-01 01:46:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:46:58.398901 | orchestrator | 2026-04-01 01:46:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:46:58.398976 | orchestrator | 2026-04-01 01:46:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:01.440996 | orchestrator | 2026-04-01 01:47:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:01.442990 | orchestrator | 2026-04-01 01:47:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:01.443272 | orchestrator | 2026-04-01 01:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:04.491811 | orchestrator | 2026-04-01 01:47:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:04.493025 | orchestrator | 2026-04-01 01:47:04 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:04.493121 | orchestrator | 2026-04-01 01:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:07.539650 | orchestrator | 2026-04-01 01:47:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:07.541117 | orchestrator | 2026-04-01 01:47:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:07.541183 | orchestrator | 2026-04-01 01:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:10.590091 | orchestrator | 2026-04-01 01:47:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:10.591940 | orchestrator | 2026-04-01 01:47:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:10.591985 | orchestrator | 2026-04-01 01:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:13.639810 | orchestrator | 2026-04-01 01:47:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:13.641268 | orchestrator | 2026-04-01 01:47:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:13.641313 | orchestrator | 2026-04-01 01:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:16.683168 | orchestrator | 2026-04-01 01:47:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:16.683830 | orchestrator | 2026-04-01 01:47:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:16.683865 | orchestrator | 2026-04-01 01:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:19.735148 | orchestrator | 2026-04-01 01:47:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:19.735249 | orchestrator | 2026-04-01 01:47:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:47:19.735290 | orchestrator | 2026-04-01 01:47:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:22.779812 | orchestrator | 2026-04-01 01:47:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:22.781951 | orchestrator | 2026-04-01 01:47:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:22.782133 | orchestrator | 2026-04-01 01:47:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:25.827862 | orchestrator | 2026-04-01 01:47:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:25.828888 | orchestrator | 2026-04-01 01:47:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:25.829030 | orchestrator | 2026-04-01 01:47:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:28.874312 | orchestrator | 2026-04-01 01:47:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:28.876133 | orchestrator | 2026-04-01 01:47:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:28.876184 | orchestrator | 2026-04-01 01:47:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:31.923583 | orchestrator | 2026-04-01 01:47:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:31.925426 | orchestrator | 2026-04-01 01:47:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:31.925586 | orchestrator | 2026-04-01 01:47:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:47:34.974448 | orchestrator | 2026-04-01 01:47:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:47:34.976082 | orchestrator | 2026-04-01 01:47:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:47:34.976172 | orchestrator | 2026-04-01 01:47:34 | INFO  | Wait 1 second(s) 
until the next check
2026-04-01 01:47:38.022097 | orchestrator | 2026-04-01 01:47:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:47:38.024042 | orchestrator | 2026-04-01 01:47:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:47:38.024100 | orchestrator | 2026-04-01 01:47:38 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED polling for both tasks repeated every ~3 seconds from 01:47:41 through 01:52:33, then after a gap from 01:54:36 through 01:54:48 ...]
2026-04-01 01:54:52.002601 | orchestrator | 2026-04-01 01:54:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:54:52.003923 | orchestrator | 2026-04-01 01:54:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:54:52.003962 | orchestrator | 2026-04-01 01:54:51 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 01:54:55.052913 | orchestrator | 2026-04-01 01:54:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:54:55.054424 | orchestrator | 2026-04-01 01:54:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:54:55.054486 | orchestrator | 2026-04-01 01:54:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:54:58.104598 | orchestrator | 2026-04-01 01:54:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:54:58.106483 | orchestrator | 2026-04-01 01:54:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:54:58.106639 | orchestrator | 2026-04-01 01:54:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:01.147551 | orchestrator | 2026-04-01 01:55:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:01.149684 | orchestrator | 2026-04-01 01:55:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:01.149742 | orchestrator | 2026-04-01 01:55:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:04.191957 | orchestrator | 2026-04-01 01:55:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:04.194343 | orchestrator | 2026-04-01 01:55:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:04.194597 | orchestrator | 2026-04-01 01:55:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:07.242919 | orchestrator | 2026-04-01 01:55:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:07.245331 | orchestrator | 2026-04-01 01:55:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:07.245600 | orchestrator | 2026-04-01 01:55:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:10.287589 | orchestrator | 2026-04-01 
01:55:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:10.288843 | orchestrator | 2026-04-01 01:55:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:10.288887 | orchestrator | 2026-04-01 01:55:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:13.334247 | orchestrator | 2026-04-01 01:55:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:13.335823 | orchestrator | 2026-04-01 01:55:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:13.335908 | orchestrator | 2026-04-01 01:55:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:16.382864 | orchestrator | 2026-04-01 01:55:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:16.383955 | orchestrator | 2026-04-01 01:55:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:16.384139 | orchestrator | 2026-04-01 01:55:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:19.429861 | orchestrator | 2026-04-01 01:55:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:19.431501 | orchestrator | 2026-04-01 01:55:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:19.431559 | orchestrator | 2026-04-01 01:55:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:22.475350 | orchestrator | 2026-04-01 01:55:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:22.476787 | orchestrator | 2026-04-01 01:55:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:22.477091 | orchestrator | 2026-04-01 01:55:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:25.521438 | orchestrator | 2026-04-01 01:55:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:55:25.523108 | orchestrator | 2026-04-01 01:55:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:25.523250 | orchestrator | 2026-04-01 01:55:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:28.568020 | orchestrator | 2026-04-01 01:55:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:28.569396 | orchestrator | 2026-04-01 01:55:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:28.569519 | orchestrator | 2026-04-01 01:55:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:31.615644 | orchestrator | 2026-04-01 01:55:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:31.617970 | orchestrator | 2026-04-01 01:55:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:31.618072 | orchestrator | 2026-04-01 01:55:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:34.664736 | orchestrator | 2026-04-01 01:55:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:34.666550 | orchestrator | 2026-04-01 01:55:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:34.666614 | orchestrator | 2026-04-01 01:55:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:37.715320 | orchestrator | 2026-04-01 01:55:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:37.717558 | orchestrator | 2026-04-01 01:55:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:37.717622 | orchestrator | 2026-04-01 01:55:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:40.759065 | orchestrator | 2026-04-01 01:55:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:40.760763 | orchestrator | 2026-04-01 01:55:40 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:40.760823 | orchestrator | 2026-04-01 01:55:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:43.802965 | orchestrator | 2026-04-01 01:55:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:43.804886 | orchestrator | 2026-04-01 01:55:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:43.805057 | orchestrator | 2026-04-01 01:55:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:46.846269 | orchestrator | 2026-04-01 01:55:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:46.848275 | orchestrator | 2026-04-01 01:55:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:46.848345 | orchestrator | 2026-04-01 01:55:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:49.891700 | orchestrator | 2026-04-01 01:55:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:49.894376 | orchestrator | 2026-04-01 01:55:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:49.894444 | orchestrator | 2026-04-01 01:55:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:52.936642 | orchestrator | 2026-04-01 01:55:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:52.938582 | orchestrator | 2026-04-01 01:55:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:52.938666 | orchestrator | 2026-04-01 01:55:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:55.985822 | orchestrator | 2026-04-01 01:55:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:55.988384 | orchestrator | 2026-04-01 01:55:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:55:55.988463 | orchestrator | 2026-04-01 01:55:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:55:59.030779 | orchestrator | 2026-04-01 01:55:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:55:59.031475 | orchestrator | 2026-04-01 01:55:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:55:59.031519 | orchestrator | 2026-04-01 01:55:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:02.079322 | orchestrator | 2026-04-01 01:56:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:02.080557 | orchestrator | 2026-04-01 01:56:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:02.080647 | orchestrator | 2026-04-01 01:56:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:05.125320 | orchestrator | 2026-04-01 01:56:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:05.126978 | orchestrator | 2026-04-01 01:56:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:05.127038 | orchestrator | 2026-04-01 01:56:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:08.171731 | orchestrator | 2026-04-01 01:56:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:08.173442 | orchestrator | 2026-04-01 01:56:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:08.173591 | orchestrator | 2026-04-01 01:56:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:11.220542 | orchestrator | 2026-04-01 01:56:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:11.222803 | orchestrator | 2026-04-01 01:56:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:11.223032 | orchestrator | 2026-04-01 01:56:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:56:14.273807 | orchestrator | 2026-04-01 01:56:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:14.276279 | orchestrator | 2026-04-01 01:56:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:14.276328 | orchestrator | 2026-04-01 01:56:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:17.327446 | orchestrator | 2026-04-01 01:56:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:17.328926 | orchestrator | 2026-04-01 01:56:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:17.328988 | orchestrator | 2026-04-01 01:56:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:20.369679 | orchestrator | 2026-04-01 01:56:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:20.371438 | orchestrator | 2026-04-01 01:56:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:20.371500 | orchestrator | 2026-04-01 01:56:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:23.414795 | orchestrator | 2026-04-01 01:56:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:23.417679 | orchestrator | 2026-04-01 01:56:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:23.417752 | orchestrator | 2026-04-01 01:56:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:26.458405 | orchestrator | 2026-04-01 01:56:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:26.460442 | orchestrator | 2026-04-01 01:56:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:26.460534 | orchestrator | 2026-04-01 01:56:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:29.504767 | orchestrator | 2026-04-01 
01:56:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:29.507085 | orchestrator | 2026-04-01 01:56:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:29.507143 | orchestrator | 2026-04-01 01:56:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:32.554228 | orchestrator | 2026-04-01 01:56:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:32.556036 | orchestrator | 2026-04-01 01:56:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:32.556087 | orchestrator | 2026-04-01 01:56:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:35.601881 | orchestrator | 2026-04-01 01:56:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:35.604926 | orchestrator | 2026-04-01 01:56:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:35.604978 | orchestrator | 2026-04-01 01:56:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:38.655623 | orchestrator | 2026-04-01 01:56:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:38.657506 | orchestrator | 2026-04-01 01:56:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:38.657572 | orchestrator | 2026-04-01 01:56:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:41.700851 | orchestrator | 2026-04-01 01:56:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:41.703142 | orchestrator | 2026-04-01 01:56:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:41.703225 | orchestrator | 2026-04-01 01:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:44.745539 | orchestrator | 2026-04-01 01:56:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:56:44.748389 | orchestrator | 2026-04-01 01:56:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:44.748484 | orchestrator | 2026-04-01 01:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:47.803147 | orchestrator | 2026-04-01 01:56:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:47.804642 | orchestrator | 2026-04-01 01:56:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:47.804701 | orchestrator | 2026-04-01 01:56:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:50.849716 | orchestrator | 2026-04-01 01:56:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:50.852135 | orchestrator | 2026-04-01 01:56:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:50.852218 | orchestrator | 2026-04-01 01:56:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:53.895035 | orchestrator | 2026-04-01 01:56:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:53.896942 | orchestrator | 2026-04-01 01:56:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:53.897012 | orchestrator | 2026-04-01 01:56:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:56.946574 | orchestrator | 2026-04-01 01:56:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:56.948424 | orchestrator | 2026-04-01 01:56:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:56.948525 | orchestrator | 2026-04-01 01:56:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:56:59.987223 | orchestrator | 2026-04-01 01:56:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:56:59.988725 | orchestrator | 2026-04-01 01:56:59 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:56:59.988949 | orchestrator | 2026-04-01 01:56:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:03.032737 | orchestrator | 2026-04-01 01:57:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:03.033570 | orchestrator | 2026-04-01 01:57:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:03.033635 | orchestrator | 2026-04-01 01:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:06.077522 | orchestrator | 2026-04-01 01:57:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:06.079323 | orchestrator | 2026-04-01 01:57:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:06.079420 | orchestrator | 2026-04-01 01:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:09.119288 | orchestrator | 2026-04-01 01:57:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:09.120489 | orchestrator | 2026-04-01 01:57:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:09.120574 | orchestrator | 2026-04-01 01:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:12.164873 | orchestrator | 2026-04-01 01:57:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:12.166533 | orchestrator | 2026-04-01 01:57:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:12.166645 | orchestrator | 2026-04-01 01:57:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:15.212187 | orchestrator | 2026-04-01 01:57:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:15.214124 | orchestrator | 2026-04-01 01:57:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
01:57:15.214192 | orchestrator | 2026-04-01 01:57:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:18.261758 | orchestrator | 2026-04-01 01:57:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:18.262929 | orchestrator | 2026-04-01 01:57:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:18.262965 | orchestrator | 2026-04-01 01:57:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:21.307172 | orchestrator | 2026-04-01 01:57:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:21.309799 | orchestrator | 2026-04-01 01:57:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:21.309856 | orchestrator | 2026-04-01 01:57:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:24.354909 | orchestrator | 2026-04-01 01:57:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:24.357654 | orchestrator | 2026-04-01 01:57:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:24.357727 | orchestrator | 2026-04-01 01:57:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:27.403573 | orchestrator | 2026-04-01 01:57:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:27.405366 | orchestrator | 2026-04-01 01:57:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:27.405434 | orchestrator | 2026-04-01 01:57:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:30.448932 | orchestrator | 2026-04-01 01:57:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:30.451098 | orchestrator | 2026-04-01 01:57:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:30.451142 | orchestrator | 2026-04-01 01:57:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 01:57:33.495024 | orchestrator | 2026-04-01 01:57:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:33.497464 | orchestrator | 2026-04-01 01:57:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:33.497553 | orchestrator | 2026-04-01 01:57:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:36.545137 | orchestrator | 2026-04-01 01:57:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:36.546845 | orchestrator | 2026-04-01 01:57:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:36.546909 | orchestrator | 2026-04-01 01:57:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:39.595282 | orchestrator | 2026-04-01 01:57:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:39.597079 | orchestrator | 2026-04-01 01:57:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:39.597211 | orchestrator | 2026-04-01 01:57:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:42.638657 | orchestrator | 2026-04-01 01:57:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:42.642157 | orchestrator | 2026-04-01 01:57:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:42.642213 | orchestrator | 2026-04-01 01:57:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:45.684045 | orchestrator | 2026-04-01 01:57:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:45.685678 | orchestrator | 2026-04-01 01:57:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:45.686090 | orchestrator | 2026-04-01 01:57:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:48.736048 | orchestrator | 2026-04-01 
01:57:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:48.737687 | orchestrator | 2026-04-01 01:57:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:48.737740 | orchestrator | 2026-04-01 01:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:51.787578 | orchestrator | 2026-04-01 01:57:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:51.789054 | orchestrator | 2026-04-01 01:57:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:51.789083 | orchestrator | 2026-04-01 01:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:54.840426 | orchestrator | 2026-04-01 01:57:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:54.842892 | orchestrator | 2026-04-01 01:57:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:54.842951 | orchestrator | 2026-04-01 01:57:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:57:57.885292 | orchestrator | 2026-04-01 01:57:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:57:57.887143 | orchestrator | 2026-04-01 01:57:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:57:57.887229 | orchestrator | 2026-04-01 01:57:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:00.929419 | orchestrator | 2026-04-01 01:58:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:00.932193 | orchestrator | 2026-04-01 01:58:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:00.932252 | orchestrator | 2026-04-01 01:58:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:03.982936 | orchestrator | 2026-04-01 01:58:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 01:58:03.984360 | orchestrator | 2026-04-01 01:58:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:03.984416 | orchestrator | 2026-04-01 01:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:07.035451 | orchestrator | 2026-04-01 01:58:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:07.037445 | orchestrator | 2026-04-01 01:58:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:07.037709 | orchestrator | 2026-04-01 01:58:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:10.084955 | orchestrator | 2026-04-01 01:58:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:10.087180 | orchestrator | 2026-04-01 01:58:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:10.087333 | orchestrator | 2026-04-01 01:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:13.128254 | orchestrator | 2026-04-01 01:58:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:13.129828 | orchestrator | 2026-04-01 01:58:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:13.129969 | orchestrator | 2026-04-01 01:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:16.174618 | orchestrator | 2026-04-01 01:58:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:16.175795 | orchestrator | 2026-04-01 01:58:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:16.175918 | orchestrator | 2026-04-01 01:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:19.227280 | orchestrator | 2026-04-01 01:58:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:19.229355 | orchestrator | 2026-04-01 01:58:19 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:19.229476 | orchestrator | 2026-04-01 01:58:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:22.272192 | orchestrator | 2026-04-01 01:58:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:22.274145 | orchestrator | 2026-04-01 01:58:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:22.274584 | orchestrator | 2026-04-01 01:58:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:25.320687 | orchestrator | 2026-04-01 01:58:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:25.322307 | orchestrator | 2026-04-01 01:58:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:25.322457 | orchestrator | 2026-04-01 01:58:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:28.368406 | orchestrator | 2026-04-01 01:58:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:28.369660 | orchestrator | 2026-04-01 01:58:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:28.369821 | orchestrator | 2026-04-01 01:58:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:31.415553 | orchestrator | 2026-04-01 01:58:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:31.416902 | orchestrator | 2026-04-01 01:58:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 01:58:31.417090 | orchestrator | 2026-04-01 01:58:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 01:58:34.463172 | orchestrator | 2026-04-01 01:58:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 01:58:34.464522 | orchestrator | 2026-04-01 01:58:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
2026-04-01 01:58:34.464692 | orchestrator | 2026-04-01 01:58:34 | INFO  | Wait 1 second(s) until the next check
2026-04-01 01:58:37.508922 | orchestrator | 2026-04-01 01:58:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 01:58:37.509129 | orchestrator | 2026-04-01 01:58:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 01:58:37.509159 | orchestrator | 2026-04-01 01:58:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:58:40 through 02:04:03; tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 remained in state STARTED throughout ...]
2026-04-01 02:04:06.743780 | orchestrator | 2026-04-01 02:04:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:04:06.745756 | orchestrator | 2026-04-01 02:04:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:04:06.746003 | orchestrator | 2026-04-01 02:04:06 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 02:04:09.792533 | orchestrator | 2026-04-01 02:04:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:09.796281 | orchestrator | 2026-04-01 02:04:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:09.796397 | orchestrator | 2026-04-01 02:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:12.848960 | orchestrator | 2026-04-01 02:04:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:12.851208 | orchestrator | 2026-04-01 02:04:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:12.851277 | orchestrator | 2026-04-01 02:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:15.892228 | orchestrator | 2026-04-01 02:04:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:15.893375 | orchestrator | 2026-04-01 02:04:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:15.893418 | orchestrator | 2026-04-01 02:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:18.926652 | orchestrator | 2026-04-01 02:04:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:18.926949 | orchestrator | 2026-04-01 02:04:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:18.926970 | orchestrator | 2026-04-01 02:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:21.975731 | orchestrator | 2026-04-01 02:04:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:21.977586 | orchestrator | 2026-04-01 02:04:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:21.977763 | orchestrator | 2026-04-01 02:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:25.022512 | orchestrator | 2026-04-01 
02:04:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:25.023527 | orchestrator | 2026-04-01 02:04:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:25.023915 | orchestrator | 2026-04-01 02:04:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:28.078974 | orchestrator | 2026-04-01 02:04:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:28.080293 | orchestrator | 2026-04-01 02:04:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:28.080354 | orchestrator | 2026-04-01 02:04:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:31.130726 | orchestrator | 2026-04-01 02:04:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:31.133410 | orchestrator | 2026-04-01 02:04:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:31.133452 | orchestrator | 2026-04-01 02:04:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:34.187479 | orchestrator | 2026-04-01 02:04:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:34.190356 | orchestrator | 2026-04-01 02:04:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:34.190520 | orchestrator | 2026-04-01 02:04:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:37.243858 | orchestrator | 2026-04-01 02:04:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:37.247484 | orchestrator | 2026-04-01 02:04:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:37.247589 | orchestrator | 2026-04-01 02:04:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:40.293930 | orchestrator | 2026-04-01 02:04:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:04:40.297386 | orchestrator | 2026-04-01 02:04:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:40.297701 | orchestrator | 2026-04-01 02:04:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:43.348550 | orchestrator | 2026-04-01 02:04:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:43.355305 | orchestrator | 2026-04-01 02:04:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:43.355427 | orchestrator | 2026-04-01 02:04:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:46.406695 | orchestrator | 2026-04-01 02:04:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:46.408391 | orchestrator | 2026-04-01 02:04:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:46.408448 | orchestrator | 2026-04-01 02:04:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:49.458222 | orchestrator | 2026-04-01 02:04:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:49.461314 | orchestrator | 2026-04-01 02:04:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:49.461789 | orchestrator | 2026-04-01 02:04:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:52.512501 | orchestrator | 2026-04-01 02:04:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:52.514745 | orchestrator | 2026-04-01 02:04:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:52.514828 | orchestrator | 2026-04-01 02:04:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:55.565842 | orchestrator | 2026-04-01 02:04:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:55.566834 | orchestrator | 2026-04-01 02:04:55 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:55.566870 | orchestrator | 2026-04-01 02:04:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:04:58.613566 | orchestrator | 2026-04-01 02:04:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:04:58.615506 | orchestrator | 2026-04-01 02:04:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:04:58.615567 | orchestrator | 2026-04-01 02:04:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:01.660029 | orchestrator | 2026-04-01 02:05:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:01.662180 | orchestrator | 2026-04-01 02:05:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:01.662373 | orchestrator | 2026-04-01 02:05:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:04.712726 | orchestrator | 2026-04-01 02:05:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:04.716082 | orchestrator | 2026-04-01 02:05:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:04.716423 | orchestrator | 2026-04-01 02:05:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:07.760487 | orchestrator | 2026-04-01 02:05:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:07.762309 | orchestrator | 2026-04-01 02:05:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:07.762369 | orchestrator | 2026-04-01 02:05:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:10.812936 | orchestrator | 2026-04-01 02:05:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:10.816391 | orchestrator | 2026-04-01 02:05:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:05:10.816458 | orchestrator | 2026-04-01 02:05:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:13.865895 | orchestrator | 2026-04-01 02:05:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:13.868458 | orchestrator | 2026-04-01 02:05:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:13.868745 | orchestrator | 2026-04-01 02:05:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:16.923031 | orchestrator | 2026-04-01 02:05:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:16.924351 | orchestrator | 2026-04-01 02:05:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:16.924413 | orchestrator | 2026-04-01 02:05:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:19.976300 | orchestrator | 2026-04-01 02:05:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:19.977678 | orchestrator | 2026-04-01 02:05:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:19.977760 | orchestrator | 2026-04-01 02:05:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:23.025659 | orchestrator | 2026-04-01 02:05:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:23.026735 | orchestrator | 2026-04-01 02:05:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:23.026818 | orchestrator | 2026-04-01 02:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:26.076301 | orchestrator | 2026-04-01 02:05:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:26.077848 | orchestrator | 2026-04-01 02:05:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:26.077883 | orchestrator | 2026-04-01 02:05:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:05:29.116602 | orchestrator | 2026-04-01 02:05:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:29.117433 | orchestrator | 2026-04-01 02:05:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:29.117505 | orchestrator | 2026-04-01 02:05:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:32.164612 | orchestrator | 2026-04-01 02:05:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:32.164984 | orchestrator | 2026-04-01 02:05:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:32.165022 | orchestrator | 2026-04-01 02:05:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:35.213174 | orchestrator | 2026-04-01 02:05:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:35.215460 | orchestrator | 2026-04-01 02:05:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:35.215516 | orchestrator | 2026-04-01 02:05:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:38.255457 | orchestrator | 2026-04-01 02:05:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:38.257839 | orchestrator | 2026-04-01 02:05:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:38.257909 | orchestrator | 2026-04-01 02:05:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:41.302985 | orchestrator | 2026-04-01 02:05:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:41.306700 | orchestrator | 2026-04-01 02:05:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:41.306814 | orchestrator | 2026-04-01 02:05:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:44.358079 | orchestrator | 2026-04-01 
02:05:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:44.361622 | orchestrator | 2026-04-01 02:05:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:44.361678 | orchestrator | 2026-04-01 02:05:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:47.407065 | orchestrator | 2026-04-01 02:05:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:47.408925 | orchestrator | 2026-04-01 02:05:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:47.408984 | orchestrator | 2026-04-01 02:05:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:50.458469 | orchestrator | 2026-04-01 02:05:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:50.459348 | orchestrator | 2026-04-01 02:05:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:50.459454 | orchestrator | 2026-04-01 02:05:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:53.513428 | orchestrator | 2026-04-01 02:05:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:53.514983 | orchestrator | 2026-04-01 02:05:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:53.515075 | orchestrator | 2026-04-01 02:05:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:56.568120 | orchestrator | 2026-04-01 02:05:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:05:56.570699 | orchestrator | 2026-04-01 02:05:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:56.570793 | orchestrator | 2026-04-01 02:05:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:05:59.626088 | orchestrator | 2026-04-01 02:05:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:05:59.627218 | orchestrator | 2026-04-01 02:05:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:05:59.627262 | orchestrator | 2026-04-01 02:05:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:02.674813 | orchestrator | 2026-04-01 02:06:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:02.676335 | orchestrator | 2026-04-01 02:06:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:02.676652 | orchestrator | 2026-04-01 02:06:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:05.726509 | orchestrator | 2026-04-01 02:06:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:05.727084 | orchestrator | 2026-04-01 02:06:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:05.727214 | orchestrator | 2026-04-01 02:06:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:08.770845 | orchestrator | 2026-04-01 02:06:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:08.773052 | orchestrator | 2026-04-01 02:06:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:08.773131 | orchestrator | 2026-04-01 02:06:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:11.823289 | orchestrator | 2026-04-01 02:06:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:11.824423 | orchestrator | 2026-04-01 02:06:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:11.824476 | orchestrator | 2026-04-01 02:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:14.883763 | orchestrator | 2026-04-01 02:06:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:14.885069 | orchestrator | 2026-04-01 02:06:14 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:14.885162 | orchestrator | 2026-04-01 02:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:17.932483 | orchestrator | 2026-04-01 02:06:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:17.933665 | orchestrator | 2026-04-01 02:06:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:17.934576 | orchestrator | 2026-04-01 02:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:20.986909 | orchestrator | 2026-04-01 02:06:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:20.988276 | orchestrator | 2026-04-01 02:06:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:20.988388 | orchestrator | 2026-04-01 02:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:24.038391 | orchestrator | 2026-04-01 02:06:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:24.038845 | orchestrator | 2026-04-01 02:06:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:24.038891 | orchestrator | 2026-04-01 02:06:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:27.083381 | orchestrator | 2026-04-01 02:06:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:27.086338 | orchestrator | 2026-04-01 02:06:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:27.086406 | orchestrator | 2026-04-01 02:06:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:30.131565 | orchestrator | 2026-04-01 02:06:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:30.133260 | orchestrator | 2026-04-01 02:06:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:06:30.133389 | orchestrator | 2026-04-01 02:06:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:33.179916 | orchestrator | 2026-04-01 02:06:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:33.181986 | orchestrator | 2026-04-01 02:06:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:33.182164 | orchestrator | 2026-04-01 02:06:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:36.233852 | orchestrator | 2026-04-01 02:06:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:36.236264 | orchestrator | 2026-04-01 02:06:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:36.236405 | orchestrator | 2026-04-01 02:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:39.297027 | orchestrator | 2026-04-01 02:06:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:39.298925 | orchestrator | 2026-04-01 02:06:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:39.298975 | orchestrator | 2026-04-01 02:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:42.347786 | orchestrator | 2026-04-01 02:06:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:42.349478 | orchestrator | 2026-04-01 02:06:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:42.349516 | orchestrator | 2026-04-01 02:06:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:45.394739 | orchestrator | 2026-04-01 02:06:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:45.396772 | orchestrator | 2026-04-01 02:06:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:45.396813 | orchestrator | 2026-04-01 02:06:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:06:48.440824 | orchestrator | 2026-04-01 02:06:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:48.443306 | orchestrator | 2026-04-01 02:06:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:48.443368 | orchestrator | 2026-04-01 02:06:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:51.491123 | orchestrator | 2026-04-01 02:06:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:51.493031 | orchestrator | 2026-04-01 02:06:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:51.493167 | orchestrator | 2026-04-01 02:06:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:54.537304 | orchestrator | 2026-04-01 02:06:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:54.538990 | orchestrator | 2026-04-01 02:06:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:54.539072 | orchestrator | 2026-04-01 02:06:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:06:57.587333 | orchestrator | 2026-04-01 02:06:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:06:57.589206 | orchestrator | 2026-04-01 02:06:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:06:57.589372 | orchestrator | 2026-04-01 02:06:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:00.640758 | orchestrator | 2026-04-01 02:07:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:00.643273 | orchestrator | 2026-04-01 02:07:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:00.643506 | orchestrator | 2026-04-01 02:07:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:03.691675 | orchestrator | 2026-04-01 
02:07:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:03.694504 | orchestrator | 2026-04-01 02:07:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:03.694567 | orchestrator | 2026-04-01 02:07:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:06.744659 | orchestrator | 2026-04-01 02:07:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:06.746196 | orchestrator | 2026-04-01 02:07:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:06.746325 | orchestrator | 2026-04-01 02:07:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:09.791407 | orchestrator | 2026-04-01 02:07:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:09.792266 | orchestrator | 2026-04-01 02:07:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:09.792313 | orchestrator | 2026-04-01 02:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:12.834159 | orchestrator | 2026-04-01 02:07:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:12.836040 | orchestrator | 2026-04-01 02:07:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:12.836131 | orchestrator | 2026-04-01 02:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:15.887834 | orchestrator | 2026-04-01 02:07:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:15.889387 | orchestrator | 2026-04-01 02:07:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:15.889555 | orchestrator | 2026-04-01 02:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:18.949582 | orchestrator | 2026-04-01 02:07:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:07:18.950784 | orchestrator | 2026-04-01 02:07:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:18.950850 | orchestrator | 2026-04-01 02:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:21.996070 | orchestrator | 2026-04-01 02:07:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:21.997708 | orchestrator | 2026-04-01 02:07:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:21.997767 | orchestrator | 2026-04-01 02:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:25.045588 | orchestrator | 2026-04-01 02:07:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:25.047539 | orchestrator | 2026-04-01 02:07:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:25.047577 | orchestrator | 2026-04-01 02:07:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:28.098173 | orchestrator | 2026-04-01 02:07:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:28.100412 | orchestrator | 2026-04-01 02:07:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:28.100555 | orchestrator | 2026-04-01 02:07:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:31.154805 | orchestrator | 2026-04-01 02:07:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:31.156001 | orchestrator | 2026-04-01 02:07:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:31.156077 | orchestrator | 2026-04-01 02:07:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:34.202164 | orchestrator | 2026-04-01 02:07:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:34.204674 | orchestrator | 2026-04-01 02:07:34 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:34.204735 | orchestrator | 2026-04-01 02:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:37.252779 | orchestrator | 2026-04-01 02:07:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:37.254293 | orchestrator | 2026-04-01 02:07:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:37.254401 | orchestrator | 2026-04-01 02:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:40.305890 | orchestrator | 2026-04-01 02:07:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:40.307886 | orchestrator | 2026-04-01 02:07:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:40.307937 | orchestrator | 2026-04-01 02:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:43.357733 | orchestrator | 2026-04-01 02:07:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:43.359370 | orchestrator | 2026-04-01 02:07:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:43.359434 | orchestrator | 2026-04-01 02:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:46.408224 | orchestrator | 2026-04-01 02:07:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:46.410423 | orchestrator | 2026-04-01 02:07:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:07:46.410691 | orchestrator | 2026-04-01 02:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:07:49.456921 | orchestrator | 2026-04-01 02:07:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:07:49.459150 | orchestrator | 2026-04-01 02:07:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:07:49.459187 | orchestrator | 2026-04-01 02:07:49 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:07:52.513311 | orchestrator | 2026-04-01 02:07:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:07:52.515388 | orchestrator | 2026-04-01 02:07:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:07:52.515463 | orchestrator | 2026-04-01 02:07:52 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED from 02:07:52 through 02:12:51 ...]
2026-04-01 02:12:51.394306 | orchestrator | 2026-04-01 02:12:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:12:51.396745 | orchestrator | 2026-04-01 02:12:51 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:12:51.397061 | orchestrator | 2026-04-01 02:12:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:12:54.440664 | orchestrator | 2026-04-01 02:12:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:12:54.443725 | orchestrator | 2026-04-01 02:12:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:12:54.443777 | orchestrator | 2026-04-01 02:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:12:57.493181 | orchestrator | 2026-04-01 02:12:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:12:57.494554 | orchestrator | 2026-04-01 02:12:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:12:57.494669 | orchestrator | 2026-04-01 02:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:00.543658 | orchestrator | 2026-04-01 02:13:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:00.545782 | orchestrator | 2026-04-01 02:13:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:00.545834 | orchestrator | 2026-04-01 02:13:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:03.593462 | orchestrator | 2026-04-01 02:13:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:03.596060 | orchestrator | 2026-04-01 02:13:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:03.596238 | orchestrator | 2026-04-01 02:13:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:06.647807 | orchestrator | 2026-04-01 02:13:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:06.650432 | orchestrator | 2026-04-01 02:13:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:13:06.650505 | orchestrator | 2026-04-01 02:13:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:09.692147 | orchestrator | 2026-04-01 02:13:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:09.694710 | orchestrator | 2026-04-01 02:13:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:09.694763 | orchestrator | 2026-04-01 02:13:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:12.738849 | orchestrator | 2026-04-01 02:13:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:12.740528 | orchestrator | 2026-04-01 02:13:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:12.740589 | orchestrator | 2026-04-01 02:13:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:15.792520 | orchestrator | 2026-04-01 02:13:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:15.794918 | orchestrator | 2026-04-01 02:13:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:15.795175 | orchestrator | 2026-04-01 02:13:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:18.842699 | orchestrator | 2026-04-01 02:13:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:18.844609 | orchestrator | 2026-04-01 02:13:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:18.845295 | orchestrator | 2026-04-01 02:13:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:21.894593 | orchestrator | 2026-04-01 02:13:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:21.896387 | orchestrator | 2026-04-01 02:13:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:21.896441 | orchestrator | 2026-04-01 02:13:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:13:24.943850 | orchestrator | 2026-04-01 02:13:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:24.945732 | orchestrator | 2026-04-01 02:13:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:24.945790 | orchestrator | 2026-04-01 02:13:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:27.988687 | orchestrator | 2026-04-01 02:13:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:27.990639 | orchestrator | 2026-04-01 02:13:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:27.990763 | orchestrator | 2026-04-01 02:13:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:31.037844 | orchestrator | 2026-04-01 02:13:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:31.039652 | orchestrator | 2026-04-01 02:13:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:31.039767 | orchestrator | 2026-04-01 02:13:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:34.085116 | orchestrator | 2026-04-01 02:13:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:34.086201 | orchestrator | 2026-04-01 02:13:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:34.086232 | orchestrator | 2026-04-01 02:13:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:37.131802 | orchestrator | 2026-04-01 02:13:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:37.134268 | orchestrator | 2026-04-01 02:13:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:37.134656 | orchestrator | 2026-04-01 02:13:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:40.183591 | orchestrator | 2026-04-01 
02:13:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:40.185072 | orchestrator | 2026-04-01 02:13:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:40.185160 | orchestrator | 2026-04-01 02:13:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:43.233468 | orchestrator | 2026-04-01 02:13:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:43.234855 | orchestrator | 2026-04-01 02:13:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:43.234931 | orchestrator | 2026-04-01 02:13:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:46.282371 | orchestrator | 2026-04-01 02:13:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:46.284943 | orchestrator | 2026-04-01 02:13:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:46.285009 | orchestrator | 2026-04-01 02:13:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:49.334581 | orchestrator | 2026-04-01 02:13:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:49.335933 | orchestrator | 2026-04-01 02:13:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:49.335984 | orchestrator | 2026-04-01 02:13:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:52.384899 | orchestrator | 2026-04-01 02:13:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:52.387050 | orchestrator | 2026-04-01 02:13:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:52.387164 | orchestrator | 2026-04-01 02:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:55.440834 | orchestrator | 2026-04-01 02:13:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:13:55.442427 | orchestrator | 2026-04-01 02:13:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:55.442473 | orchestrator | 2026-04-01 02:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:13:58.488100 | orchestrator | 2026-04-01 02:13:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:13:58.489646 | orchestrator | 2026-04-01 02:13:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:13:58.489784 | orchestrator | 2026-04-01 02:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:01.533875 | orchestrator | 2026-04-01 02:14:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:01.535320 | orchestrator | 2026-04-01 02:14:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:01.535364 | orchestrator | 2026-04-01 02:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:04.578364 | orchestrator | 2026-04-01 02:14:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:04.581360 | orchestrator | 2026-04-01 02:14:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:04.581460 | orchestrator | 2026-04-01 02:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:07.630452 | orchestrator | 2026-04-01 02:14:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:07.632760 | orchestrator | 2026-04-01 02:14:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:07.632846 | orchestrator | 2026-04-01 02:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:10.681577 | orchestrator | 2026-04-01 02:14:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:10.683633 | orchestrator | 2026-04-01 02:14:10 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:10.683742 | orchestrator | 2026-04-01 02:14:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:13.724667 | orchestrator | 2026-04-01 02:14:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:13.726154 | orchestrator | 2026-04-01 02:14:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:13.726224 | orchestrator | 2026-04-01 02:14:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:16.768540 | orchestrator | 2026-04-01 02:14:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:16.769941 | orchestrator | 2026-04-01 02:14:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:16.770092 | orchestrator | 2026-04-01 02:14:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:19.814646 | orchestrator | 2026-04-01 02:14:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:19.816066 | orchestrator | 2026-04-01 02:14:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:19.816251 | orchestrator | 2026-04-01 02:14:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:22.859802 | orchestrator | 2026-04-01 02:14:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:22.861938 | orchestrator | 2026-04-01 02:14:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:22.862137 | orchestrator | 2026-04-01 02:14:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:25.910620 | orchestrator | 2026-04-01 02:14:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:25.912061 | orchestrator | 2026-04-01 02:14:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:14:25.912229 | orchestrator | 2026-04-01 02:14:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:28.961342 | orchestrator | 2026-04-01 02:14:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:28.963498 | orchestrator | 2026-04-01 02:14:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:28.963539 | orchestrator | 2026-04-01 02:14:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:32.011314 | orchestrator | 2026-04-01 02:14:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:32.013343 | orchestrator | 2026-04-01 02:14:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:32.013411 | orchestrator | 2026-04-01 02:14:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:35.059649 | orchestrator | 2026-04-01 02:14:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:35.061813 | orchestrator | 2026-04-01 02:14:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:35.061940 | orchestrator | 2026-04-01 02:14:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:38.109045 | orchestrator | 2026-04-01 02:14:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:38.111443 | orchestrator | 2026-04-01 02:14:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:38.111557 | orchestrator | 2026-04-01 02:14:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:41.160169 | orchestrator | 2026-04-01 02:14:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:41.161643 | orchestrator | 2026-04-01 02:14:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:41.161750 | orchestrator | 2026-04-01 02:14:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:14:44.207012 | orchestrator | 2026-04-01 02:14:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:44.208435 | orchestrator | 2026-04-01 02:14:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:44.208664 | orchestrator | 2026-04-01 02:14:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:47.259098 | orchestrator | 2026-04-01 02:14:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:47.261226 | orchestrator | 2026-04-01 02:14:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:47.261285 | orchestrator | 2026-04-01 02:14:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:50.307014 | orchestrator | 2026-04-01 02:14:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:50.308468 | orchestrator | 2026-04-01 02:14:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:50.308506 | orchestrator | 2026-04-01 02:14:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:53.352161 | orchestrator | 2026-04-01 02:14:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:53.353865 | orchestrator | 2026-04-01 02:14:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:53.353997 | orchestrator | 2026-04-01 02:14:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:56.400245 | orchestrator | 2026-04-01 02:14:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:56.402751 | orchestrator | 2026-04-01 02:14:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:56.402818 | orchestrator | 2026-04-01 02:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:14:59.452617 | orchestrator | 2026-04-01 
02:14:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:14:59.454409 | orchestrator | 2026-04-01 02:14:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:14:59.454502 | orchestrator | 2026-04-01 02:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:02.499420 | orchestrator | 2026-04-01 02:15:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:02.502478 | orchestrator | 2026-04-01 02:15:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:02.502551 | orchestrator | 2026-04-01 02:15:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:05.548970 | orchestrator | 2026-04-01 02:15:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:05.552570 | orchestrator | 2026-04-01 02:15:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:05.552655 | orchestrator | 2026-04-01 02:15:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:08.599709 | orchestrator | 2026-04-01 02:15:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:08.601414 | orchestrator | 2026-04-01 02:15:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:08.601431 | orchestrator | 2026-04-01 02:15:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:11.643370 | orchestrator | 2026-04-01 02:15:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:11.645462 | orchestrator | 2026-04-01 02:15:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:11.645690 | orchestrator | 2026-04-01 02:15:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:14.696876 | orchestrator | 2026-04-01 02:15:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:15:14.698495 | orchestrator | 2026-04-01 02:15:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:14.698736 | orchestrator | 2026-04-01 02:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:17.742354 | orchestrator | 2026-04-01 02:15:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:17.742858 | orchestrator | 2026-04-01 02:15:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:17.742891 | orchestrator | 2026-04-01 02:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:20.786312 | orchestrator | 2026-04-01 02:15:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:20.787173 | orchestrator | 2026-04-01 02:15:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:20.787205 | orchestrator | 2026-04-01 02:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:23.831803 | orchestrator | 2026-04-01 02:15:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:23.833085 | orchestrator | 2026-04-01 02:15:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:23.833298 | orchestrator | 2026-04-01 02:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:26.879006 | orchestrator | 2026-04-01 02:15:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:26.880904 | orchestrator | 2026-04-01 02:15:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:26.881094 | orchestrator | 2026-04-01 02:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:29.933911 | orchestrator | 2026-04-01 02:15:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:29.935984 | orchestrator | 2026-04-01 02:15:29 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:29.936175 | orchestrator | 2026-04-01 02:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:32.978696 | orchestrator | 2026-04-01 02:15:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:32.979105 | orchestrator | 2026-04-01 02:15:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:32.979139 | orchestrator | 2026-04-01 02:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:36.025758 | orchestrator | 2026-04-01 02:15:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:36.028169 | orchestrator | 2026-04-01 02:15:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:36.028241 | orchestrator | 2026-04-01 02:15:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:39.071369 | orchestrator | 2026-04-01 02:15:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:39.072818 | orchestrator | 2026-04-01 02:15:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:39.072871 | orchestrator | 2026-04-01 02:15:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:42.121672 | orchestrator | 2026-04-01 02:15:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:42.123178 | orchestrator | 2026-04-01 02:15:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:42.123251 | orchestrator | 2026-04-01 02:15:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:45.164265 | orchestrator | 2026-04-01 02:15:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:45.165716 | orchestrator | 2026-04-01 02:15:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:15:45.165863 | orchestrator | 2026-04-01 02:15:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:48.213899 | orchestrator | 2026-04-01 02:15:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:48.215099 | orchestrator | 2026-04-01 02:15:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:48.215207 | orchestrator | 2026-04-01 02:15:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:51.264656 | orchestrator | 2026-04-01 02:15:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:51.266723 | orchestrator | 2026-04-01 02:15:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:51.266755 | orchestrator | 2026-04-01 02:15:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:54.308166 | orchestrator | 2026-04-01 02:15:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:54.310327 | orchestrator | 2026-04-01 02:15:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:54.310402 | orchestrator | 2026-04-01 02:15:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:15:57.362933 | orchestrator | 2026-04-01 02:15:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:15:57.364485 | orchestrator | 2026-04-01 02:15:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:15:57.364523 | orchestrator | 2026-04-01 02:15:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:00.412801 | orchestrator | 2026-04-01 02:16:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:00.414722 | orchestrator | 2026-04-01 02:16:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:00.414804 | orchestrator | 2026-04-01 02:16:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:16:03.462683 | orchestrator | 2026-04-01 02:16:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:03.464808 | orchestrator | 2026-04-01 02:16:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:03.464967 | orchestrator | 2026-04-01 02:16:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:06.511398 | orchestrator | 2026-04-01 02:16:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:06.512932 | orchestrator | 2026-04-01 02:16:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:06.513043 | orchestrator | 2026-04-01 02:16:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:09.561026 | orchestrator | 2026-04-01 02:16:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:09.562307 | orchestrator | 2026-04-01 02:16:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:09.562454 | orchestrator | 2026-04-01 02:16:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:12.605537 | orchestrator | 2026-04-01 02:16:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:12.606821 | orchestrator | 2026-04-01 02:16:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:12.606892 | orchestrator | 2026-04-01 02:16:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:15.653044 | orchestrator | 2026-04-01 02:16:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:15.653156 | orchestrator | 2026-04-01 02:16:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:15.653174 | orchestrator | 2026-04-01 02:16:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:18.699371 | orchestrator | 2026-04-01 
02:16:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:18.701043 | orchestrator | 2026-04-01 02:16:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:18.701243 | orchestrator | 2026-04-01 02:16:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:21.746283 | orchestrator | 2026-04-01 02:16:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:21.748388 | orchestrator | 2026-04-01 02:16:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:21.748624 | orchestrator | 2026-04-01 02:16:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:24.793485 | orchestrator | 2026-04-01 02:16:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:24.795149 | orchestrator | 2026-04-01 02:16:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:24.795204 | orchestrator | 2026-04-01 02:16:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:27.850945 | orchestrator | 2026-04-01 02:16:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:27.853135 | orchestrator | 2026-04-01 02:16:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:27.853258 | orchestrator | 2026-04-01 02:16:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:30.898657 | orchestrator | 2026-04-01 02:16:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:16:30.900131 | orchestrator | 2026-04-01 02:16:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:16:30.900188 | orchestrator | 2026-04-01 02:16:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:16:33.945105 | orchestrator | 2026-04-01 02:16:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:16:33.946902 | orchestrator | 2026-04-01 02:16:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:16:33.946967 | orchestrator | 2026-04-01 02:16:33 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:16:36.992854 | orchestrator | 2026-04-01 02:16:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:16:36.993773 | orchestrator | 2026-04-01 02:16:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:16:36.993835 | orchestrator | 2026-04-01 02:16:36 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:22:06.180240 | orchestrator | 2026-04-01 02:22:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:22:06.182164 | orchestrator | 2026-04-01 02:22:06 | INFO 
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:06.182216 | orchestrator | 2026-04-01 02:22:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:09.230207 | orchestrator | 2026-04-01 02:22:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:09.231965 | orchestrator | 2026-04-01 02:22:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:09.232073 | orchestrator | 2026-04-01 02:22:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:12.278193 | orchestrator | 2026-04-01 02:22:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:12.279645 | orchestrator | 2026-04-01 02:22:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:12.279680 | orchestrator | 2026-04-01 02:22:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:15.327014 | orchestrator | 2026-04-01 02:22:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:15.328844 | orchestrator | 2026-04-01 02:22:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:15.329275 | orchestrator | 2026-04-01 02:22:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:18.376658 | orchestrator | 2026-04-01 02:22:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:18.379362 | orchestrator | 2026-04-01 02:22:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:18.379438 | orchestrator | 2026-04-01 02:22:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:21.425523 | orchestrator | 2026-04-01 02:22:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:21.427471 | orchestrator | 2026-04-01 02:22:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:22:21.427538 | orchestrator | 2026-04-01 02:22:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:24.472001 | orchestrator | 2026-04-01 02:22:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:24.473488 | orchestrator | 2026-04-01 02:22:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:24.473567 | orchestrator | 2026-04-01 02:22:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:27.511980 | orchestrator | 2026-04-01 02:22:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:27.513042 | orchestrator | 2026-04-01 02:22:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:27.513091 | orchestrator | 2026-04-01 02:22:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:30.564057 | orchestrator | 2026-04-01 02:22:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:30.565822 | orchestrator | 2026-04-01 02:22:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:30.565853 | orchestrator | 2026-04-01 02:22:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:33.612683 | orchestrator | 2026-04-01 02:22:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:33.614442 | orchestrator | 2026-04-01 02:22:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:33.614510 | orchestrator | 2026-04-01 02:22:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:36.652340 | orchestrator | 2026-04-01 02:22:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:36.653016 | orchestrator | 2026-04-01 02:22:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:36.653047 | orchestrator | 2026-04-01 02:22:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:22:39.704404 | orchestrator | 2026-04-01 02:22:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:39.705672 | orchestrator | 2026-04-01 02:22:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:39.705693 | orchestrator | 2026-04-01 02:22:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:42.750867 | orchestrator | 2026-04-01 02:22:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:42.752841 | orchestrator | 2026-04-01 02:22:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:42.753308 | orchestrator | 2026-04-01 02:22:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:45.796224 | orchestrator | 2026-04-01 02:22:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:45.797431 | orchestrator | 2026-04-01 02:22:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:45.797451 | orchestrator | 2026-04-01 02:22:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:48.843605 | orchestrator | 2026-04-01 02:22:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:48.846083 | orchestrator | 2026-04-01 02:22:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:48.846154 | orchestrator | 2026-04-01 02:22:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:51.891657 | orchestrator | 2026-04-01 02:22:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:51.894358 | orchestrator | 2026-04-01 02:22:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:51.894426 | orchestrator | 2026-04-01 02:22:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:54.941289 | orchestrator | 2026-04-01 
02:22:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:54.942454 | orchestrator | 2026-04-01 02:22:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:54.942516 | orchestrator | 2026-04-01 02:22:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:22:57.984063 | orchestrator | 2026-04-01 02:22:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:22:57.985398 | orchestrator | 2026-04-01 02:22:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:22:57.985485 | orchestrator | 2026-04-01 02:22:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:01.038910 | orchestrator | 2026-04-01 02:23:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:01.044651 | orchestrator | 2026-04-01 02:23:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:01.044702 | orchestrator | 2026-04-01 02:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:04.094836 | orchestrator | 2026-04-01 02:23:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:04.095810 | orchestrator | 2026-04-01 02:23:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:04.095830 | orchestrator | 2026-04-01 02:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:07.143584 | orchestrator | 2026-04-01 02:23:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:07.146244 | orchestrator | 2026-04-01 02:23:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:07.146298 | orchestrator | 2026-04-01 02:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:10.191463 | orchestrator | 2026-04-01 02:23:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:23:10.192187 | orchestrator | 2026-04-01 02:23:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:10.192235 | orchestrator | 2026-04-01 02:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:13.230803 | orchestrator | 2026-04-01 02:23:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:13.231430 | orchestrator | 2026-04-01 02:23:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:13.232045 | orchestrator | 2026-04-01 02:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:16.276195 | orchestrator | 2026-04-01 02:23:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:16.277366 | orchestrator | 2026-04-01 02:23:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:16.277596 | orchestrator | 2026-04-01 02:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:19.317936 | orchestrator | 2026-04-01 02:23:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:19.320323 | orchestrator | 2026-04-01 02:23:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:19.320384 | orchestrator | 2026-04-01 02:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:22.364143 | orchestrator | 2026-04-01 02:23:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:22.366482 | orchestrator | 2026-04-01 02:23:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:22.366526 | orchestrator | 2026-04-01 02:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:25.420547 | orchestrator | 2026-04-01 02:23:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:25.424160 | orchestrator | 2026-04-01 02:23:25 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:25.424227 | orchestrator | 2026-04-01 02:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:28.467521 | orchestrator | 2026-04-01 02:23:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:28.469057 | orchestrator | 2026-04-01 02:23:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:28.469134 | orchestrator | 2026-04-01 02:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:31.514235 | orchestrator | 2026-04-01 02:23:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:31.515539 | orchestrator | 2026-04-01 02:23:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:31.515582 | orchestrator | 2026-04-01 02:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:34.564764 | orchestrator | 2026-04-01 02:23:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:23:34.566472 | orchestrator | 2026-04-01 02:23:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:23:34.566529 | orchestrator | 2026-04-01 02:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:23:37.604790 | orchestrator | 2026-04-01 02:23:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:37.720615 | orchestrator | 2026-04-01 02:25:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:37.720735 | orchestrator | 2026-04-01 02:25:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:40.766319 | orchestrator | 2026-04-01 02:25:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:40.768073 | orchestrator | 2026-04-01 02:25:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:25:40.768171 | orchestrator | 2026-04-01 02:25:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:43.811966 | orchestrator | 2026-04-01 02:25:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:43.814007 | orchestrator | 2026-04-01 02:25:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:43.814225 | orchestrator | 2026-04-01 02:25:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:46.861884 | orchestrator | 2026-04-01 02:25:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:46.863949 | orchestrator | 2026-04-01 02:25:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:46.864062 | orchestrator | 2026-04-01 02:25:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:49.908094 | orchestrator | 2026-04-01 02:25:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:49.910907 | orchestrator | 2026-04-01 02:25:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:49.910995 | orchestrator | 2026-04-01 02:25:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:52.969608 | orchestrator | 2026-04-01 02:25:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:52.969916 | orchestrator | 2026-04-01 02:25:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:52.969955 | orchestrator | 2026-04-01 02:25:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:25:56.024965 | orchestrator | 2026-04-01 02:25:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:56.026304 | orchestrator | 2026-04-01 02:25:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:56.026390 | orchestrator | 2026-04-01 02:25:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:25:59.071022 | orchestrator | 2026-04-01 02:25:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:25:59.072443 | orchestrator | 2026-04-01 02:25:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:25:59.072495 | orchestrator | 2026-04-01 02:25:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:02.118948 | orchestrator | 2026-04-01 02:26:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:02.121554 | orchestrator | 2026-04-01 02:26:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:02.121626 | orchestrator | 2026-04-01 02:26:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:05.164819 | orchestrator | 2026-04-01 02:26:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:05.165947 | orchestrator | 2026-04-01 02:26:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:05.166085 | orchestrator | 2026-04-01 02:26:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:08.211767 | orchestrator | 2026-04-01 02:26:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:08.213710 | orchestrator | 2026-04-01 02:26:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:08.213779 | orchestrator | 2026-04-01 02:26:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:11.259808 | orchestrator | 2026-04-01 02:26:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:11.260647 | orchestrator | 2026-04-01 02:26:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:11.260738 | orchestrator | 2026-04-01 02:26:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:14.299024 | orchestrator | 2026-04-01 
02:26:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:14.299944 | orchestrator | 2026-04-01 02:26:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:14.299984 | orchestrator | 2026-04-01 02:26:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:17.343308 | orchestrator | 2026-04-01 02:26:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:17.344883 | orchestrator | 2026-04-01 02:26:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:17.344924 | orchestrator | 2026-04-01 02:26:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:20.388954 | orchestrator | 2026-04-01 02:26:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:20.390498 | orchestrator | 2026-04-01 02:26:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:20.391096 | orchestrator | 2026-04-01 02:26:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:23.434815 | orchestrator | 2026-04-01 02:26:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:23.436441 | orchestrator | 2026-04-01 02:26:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:23.436493 | orchestrator | 2026-04-01 02:26:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:26.489891 | orchestrator | 2026-04-01 02:26:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:26.491439 | orchestrator | 2026-04-01 02:26:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:26.491496 | orchestrator | 2026-04-01 02:26:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:29.538830 | orchestrator | 2026-04-01 02:26:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:26:29.540198 | orchestrator | 2026-04-01 02:26:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:29.540261 | orchestrator | 2026-04-01 02:26:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:32.584121 | orchestrator | 2026-04-01 02:26:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:32.586119 | orchestrator | 2026-04-01 02:26:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:32.586352 | orchestrator | 2026-04-01 02:26:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:35.634486 | orchestrator | 2026-04-01 02:26:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:35.635463 | orchestrator | 2026-04-01 02:26:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:35.635516 | orchestrator | 2026-04-01 02:26:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:38.683123 | orchestrator | 2026-04-01 02:26:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:38.684768 | orchestrator | 2026-04-01 02:26:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:38.684820 | orchestrator | 2026-04-01 02:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:41.734528 | orchestrator | 2026-04-01 02:26:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:41.736149 | orchestrator | 2026-04-01 02:26:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:41.736222 | orchestrator | 2026-04-01 02:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:44.778633 | orchestrator | 2026-04-01 02:26:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:44.781023 | orchestrator | 2026-04-01 02:26:44 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:44.781103 | orchestrator | 2026-04-01 02:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:47.825493 | orchestrator | 2026-04-01 02:26:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:47.827500 | orchestrator | 2026-04-01 02:26:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:47.827621 | orchestrator | 2026-04-01 02:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:50.870470 | orchestrator | 2026-04-01 02:26:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:50.872309 | orchestrator | 2026-04-01 02:26:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:50.872384 | orchestrator | 2026-04-01 02:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:53.923334 | orchestrator | 2026-04-01 02:26:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:53.924434 | orchestrator | 2026-04-01 02:26:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:53.924867 | orchestrator | 2026-04-01 02:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:26:56.974378 | orchestrator | 2026-04-01 02:26:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:26:56.975955 | orchestrator | 2026-04-01 02:26:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:26:56.976008 | orchestrator | 2026-04-01 02:26:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:00.019839 | orchestrator | 2026-04-01 02:27:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:00.021564 | orchestrator | 2026-04-01 02:27:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:27:00.021632 | orchestrator | 2026-04-01 02:27:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:03.066261 | orchestrator | 2026-04-01 02:27:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:03.067756 | orchestrator | 2026-04-01 02:27:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:03.067812 | orchestrator | 2026-04-01 02:27:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:06.111088 | orchestrator | 2026-04-01 02:27:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:06.113077 | orchestrator | 2026-04-01 02:27:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:06.113122 | orchestrator | 2026-04-01 02:27:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:09.159473 | orchestrator | 2026-04-01 02:27:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:09.160815 | orchestrator | 2026-04-01 02:27:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:09.160853 | orchestrator | 2026-04-01 02:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:12.213490 | orchestrator | 2026-04-01 02:27:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:12.215932 | orchestrator | 2026-04-01 02:27:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:12.215966 | orchestrator | 2026-04-01 02:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:15.258818 | orchestrator | 2026-04-01 02:27:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:15.261251 | orchestrator | 2026-04-01 02:27:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:15.261461 | orchestrator | 2026-04-01 02:27:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:27:18.307732 | orchestrator | 2026-04-01 02:27:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:18.309495 | orchestrator | 2026-04-01 02:27:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:18.309542 | orchestrator | 2026-04-01 02:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:21.356212 | orchestrator | 2026-04-01 02:27:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:21.357663 | orchestrator | 2026-04-01 02:27:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:21.357716 | orchestrator | 2026-04-01 02:27:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:24.403552 | orchestrator | 2026-04-01 02:27:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:24.407872 | orchestrator | 2026-04-01 02:27:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:24.408043 | orchestrator | 2026-04-01 02:27:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:27.455776 | orchestrator | 2026-04-01 02:27:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:27.457288 | orchestrator | 2026-04-01 02:27:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:27.457362 | orchestrator | 2026-04-01 02:27:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:30.498189 | orchestrator | 2026-04-01 02:27:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:30.500229 | orchestrator | 2026-04-01 02:27:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:30.500292 | orchestrator | 2026-04-01 02:27:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:33.546845 | orchestrator | 2026-04-01 
02:27:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:33.549186 | orchestrator | 2026-04-01 02:27:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:33.549277 | orchestrator | 2026-04-01 02:27:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:36.591104 | orchestrator | 2026-04-01 02:27:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:36.593593 | orchestrator | 2026-04-01 02:27:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:36.593633 | orchestrator | 2026-04-01 02:27:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:39.636736 | orchestrator | 2026-04-01 02:27:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:39.639697 | orchestrator | 2026-04-01 02:27:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:39.639885 | orchestrator | 2026-04-01 02:27:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:42.687787 | orchestrator | 2026-04-01 02:27:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:42.689155 | orchestrator | 2026-04-01 02:27:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:42.689191 | orchestrator | 2026-04-01 02:27:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:45.732857 | orchestrator | 2026-04-01 02:27:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:27:45.734186 | orchestrator | 2026-04-01 02:27:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:27:45.734240 | orchestrator | 2026-04-01 02:27:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:27:48.778120 | orchestrator | 2026-04-01 02:27:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED
2026-04-01 02:27:48.779853 | orchestrator | 2026-04-01 02:27:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:27:48.779912 | orchestrator | 2026-04-01 02:27:48 | INFO  | Wait 1 second(s) until the next check
[entries of this form repeat every ~3 seconds from 02:27:51 through 02:32:59, with both tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 remaining in state STARTED throughout]
2026-04-01 02:33:02.740354 | orchestrator | 2026-04-01 02:33:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:33:02.741170 | orchestrator | 2026-04-01 02:33:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:33:02.741240 | orchestrator | 2026-04-01 02:33:02 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:33:05.789626 | orchestrator | 2026-04-01 02:33:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state
STARTED 2026-04-01 02:33:05.792455 | orchestrator | 2026-04-01 02:33:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:05.792534 | orchestrator | 2026-04-01 02:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:08.841033 | orchestrator | 2026-04-01 02:33:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:08.843129 | orchestrator | 2026-04-01 02:33:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:08.843272 | orchestrator | 2026-04-01 02:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:11.891364 | orchestrator | 2026-04-01 02:33:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:11.893132 | orchestrator | 2026-04-01 02:33:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:11.893205 | orchestrator | 2026-04-01 02:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:14.941272 | orchestrator | 2026-04-01 02:33:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:14.942964 | orchestrator | 2026-04-01 02:33:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:14.942996 | orchestrator | 2026-04-01 02:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:17.993131 | orchestrator | 2026-04-01 02:33:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:17.995737 | orchestrator | 2026-04-01 02:33:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:17.995812 | orchestrator | 2026-04-01 02:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:21.048450 | orchestrator | 2026-04-01 02:33:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:21.050498 | orchestrator | 2026-04-01 02:33:21 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:21.050530 | orchestrator | 2026-04-01 02:33:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:24.093312 | orchestrator | 2026-04-01 02:33:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:24.096142 | orchestrator | 2026-04-01 02:33:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:24.096204 | orchestrator | 2026-04-01 02:33:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:27.139651 | orchestrator | 2026-04-01 02:33:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:27.141545 | orchestrator | 2026-04-01 02:33:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:27.141631 | orchestrator | 2026-04-01 02:33:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:30.188384 | orchestrator | 2026-04-01 02:33:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:30.189961 | orchestrator | 2026-04-01 02:33:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:30.190010 | orchestrator | 2026-04-01 02:33:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:33.232148 | orchestrator | 2026-04-01 02:33:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:33.234593 | orchestrator | 2026-04-01 02:33:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:33.234745 | orchestrator | 2026-04-01 02:33:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:36.288409 | orchestrator | 2026-04-01 02:33:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:36.290283 | orchestrator | 2026-04-01 02:33:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:33:36.290359 | orchestrator | 2026-04-01 02:33:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:39.338332 | orchestrator | 2026-04-01 02:33:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:39.339952 | orchestrator | 2026-04-01 02:33:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:39.339992 | orchestrator | 2026-04-01 02:33:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:42.387866 | orchestrator | 2026-04-01 02:33:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:42.389363 | orchestrator | 2026-04-01 02:33:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:42.389463 | orchestrator | 2026-04-01 02:33:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:45.441565 | orchestrator | 2026-04-01 02:33:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:45.443168 | orchestrator | 2026-04-01 02:33:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:45.443294 | orchestrator | 2026-04-01 02:33:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:48.491989 | orchestrator | 2026-04-01 02:33:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:48.494386 | orchestrator | 2026-04-01 02:33:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:48.494594 | orchestrator | 2026-04-01 02:33:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:51.544251 | orchestrator | 2026-04-01 02:33:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:51.546600 | orchestrator | 2026-04-01 02:33:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:51.546672 | orchestrator | 2026-04-01 02:33:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:33:54.596205 | orchestrator | 2026-04-01 02:33:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:54.598598 | orchestrator | 2026-04-01 02:33:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:54.598659 | orchestrator | 2026-04-01 02:33:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:33:57.641732 | orchestrator | 2026-04-01 02:33:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:33:57.643214 | orchestrator | 2026-04-01 02:33:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:33:57.643365 | orchestrator | 2026-04-01 02:33:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:00.690342 | orchestrator | 2026-04-01 02:34:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:00.691407 | orchestrator | 2026-04-01 02:34:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:00.691746 | orchestrator | 2026-04-01 02:34:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:03.739263 | orchestrator | 2026-04-01 02:34:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:03.741092 | orchestrator | 2026-04-01 02:34:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:03.741138 | orchestrator | 2026-04-01 02:34:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:06.788220 | orchestrator | 2026-04-01 02:34:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:06.789718 | orchestrator | 2026-04-01 02:34:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:06.789782 | orchestrator | 2026-04-01 02:34:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:09.835229 | orchestrator | 2026-04-01 
02:34:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:09.836797 | orchestrator | 2026-04-01 02:34:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:09.836845 | orchestrator | 2026-04-01 02:34:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:12.878323 | orchestrator | 2026-04-01 02:34:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:12.879584 | orchestrator | 2026-04-01 02:34:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:12.879638 | orchestrator | 2026-04-01 02:34:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:15.922255 | orchestrator | 2026-04-01 02:34:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:15.924266 | orchestrator | 2026-04-01 02:34:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:15.924341 | orchestrator | 2026-04-01 02:34:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:18.969053 | orchestrator | 2026-04-01 02:34:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:18.970348 | orchestrator | 2026-04-01 02:34:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:18.970411 | orchestrator | 2026-04-01 02:34:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:22.018634 | orchestrator | 2026-04-01 02:34:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:22.020397 | orchestrator | 2026-04-01 02:34:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:22.020480 | orchestrator | 2026-04-01 02:34:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:25.074864 | orchestrator | 2026-04-01 02:34:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:34:25.077008 | orchestrator | 2026-04-01 02:34:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:25.077086 | orchestrator | 2026-04-01 02:34:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:28.133067 | orchestrator | 2026-04-01 02:34:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:28.134247 | orchestrator | 2026-04-01 02:34:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:28.134291 | orchestrator | 2026-04-01 02:34:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:31.184185 | orchestrator | 2026-04-01 02:34:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:31.186260 | orchestrator | 2026-04-01 02:34:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:31.186373 | orchestrator | 2026-04-01 02:34:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:34.235855 | orchestrator | 2026-04-01 02:34:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:34.239049 | orchestrator | 2026-04-01 02:34:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:34.239179 | orchestrator | 2026-04-01 02:34:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:37.287623 | orchestrator | 2026-04-01 02:34:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:37.289218 | orchestrator | 2026-04-01 02:34:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:37.289286 | orchestrator | 2026-04-01 02:34:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:40.335371 | orchestrator | 2026-04-01 02:34:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:40.337280 | orchestrator | 2026-04-01 02:34:40 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:40.337332 | orchestrator | 2026-04-01 02:34:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:43.385405 | orchestrator | 2026-04-01 02:34:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:43.386689 | orchestrator | 2026-04-01 02:34:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:43.386738 | orchestrator | 2026-04-01 02:34:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:46.433554 | orchestrator | 2026-04-01 02:34:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:46.434937 | orchestrator | 2026-04-01 02:34:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:46.435169 | orchestrator | 2026-04-01 02:34:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:49.482298 | orchestrator | 2026-04-01 02:34:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:49.483872 | orchestrator | 2026-04-01 02:34:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:49.483919 | orchestrator | 2026-04-01 02:34:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:52.533437 | orchestrator | 2026-04-01 02:34:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:52.535350 | orchestrator | 2026-04-01 02:34:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:52.535455 | orchestrator | 2026-04-01 02:34:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:55.577455 | orchestrator | 2026-04-01 02:34:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:55.577739 | orchestrator | 2026-04-01 02:34:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:34:55.577780 | orchestrator | 2026-04-01 02:34:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:34:58.625206 | orchestrator | 2026-04-01 02:34:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:34:58.627206 | orchestrator | 2026-04-01 02:34:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:34:58.627282 | orchestrator | 2026-04-01 02:34:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:01.673227 | orchestrator | 2026-04-01 02:35:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:01.674839 | orchestrator | 2026-04-01 02:35:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:01.674982 | orchestrator | 2026-04-01 02:35:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:04.720804 | orchestrator | 2026-04-01 02:35:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:04.724117 | orchestrator | 2026-04-01 02:35:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:04.724291 | orchestrator | 2026-04-01 02:35:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:07.773642 | orchestrator | 2026-04-01 02:35:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:07.774611 | orchestrator | 2026-04-01 02:35:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:07.774865 | orchestrator | 2026-04-01 02:35:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:10.827905 | orchestrator | 2026-04-01 02:35:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:10.829325 | orchestrator | 2026-04-01 02:35:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:10.829411 | orchestrator | 2026-04-01 02:35:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:35:13.874339 | orchestrator | 2026-04-01 02:35:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:13.876300 | orchestrator | 2026-04-01 02:35:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:13.876372 | orchestrator | 2026-04-01 02:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:16.925087 | orchestrator | 2026-04-01 02:35:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:16.926820 | orchestrator | 2026-04-01 02:35:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:16.926913 | orchestrator | 2026-04-01 02:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:19.976706 | orchestrator | 2026-04-01 02:35:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:19.978676 | orchestrator | 2026-04-01 02:35:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:19.978756 | orchestrator | 2026-04-01 02:35:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:23.017569 | orchestrator | 2026-04-01 02:35:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:23.017680 | orchestrator | 2026-04-01 02:35:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:23.017697 | orchestrator | 2026-04-01 02:35:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:26.072333 | orchestrator | 2026-04-01 02:35:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:26.073929 | orchestrator | 2026-04-01 02:35:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:26.074182 | orchestrator | 2026-04-01 02:35:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:29.126197 | orchestrator | 2026-04-01 
02:35:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:29.127942 | orchestrator | 2026-04-01 02:35:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:29.127980 | orchestrator | 2026-04-01 02:35:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:32.179957 | orchestrator | 2026-04-01 02:35:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:32.183789 | orchestrator | 2026-04-01 02:35:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:32.183875 | orchestrator | 2026-04-01 02:35:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:35.232936 | orchestrator | 2026-04-01 02:35:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:35.236367 | orchestrator | 2026-04-01 02:35:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:35.236434 | orchestrator | 2026-04-01 02:35:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:38.290969 | orchestrator | 2026-04-01 02:35:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:38.295132 | orchestrator | 2026-04-01 02:35:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:38.295218 | orchestrator | 2026-04-01 02:35:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:41.353062 | orchestrator | 2026-04-01 02:35:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:41.354776 | orchestrator | 2026-04-01 02:35:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:41.354927 | orchestrator | 2026-04-01 02:35:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:44.403816 | orchestrator | 2026-04-01 02:35:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:35:44.406371 | orchestrator | 2026-04-01 02:35:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:44.406459 | orchestrator | 2026-04-01 02:35:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:47.454231 | orchestrator | 2026-04-01 02:35:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:47.456937 | orchestrator | 2026-04-01 02:35:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:47.456987 | orchestrator | 2026-04-01 02:35:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:50.500825 | orchestrator | 2026-04-01 02:35:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:50.502579 | orchestrator | 2026-04-01 02:35:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:50.502636 | orchestrator | 2026-04-01 02:35:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:53.545047 | orchestrator | 2026-04-01 02:35:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:53.545381 | orchestrator | 2026-04-01 02:35:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:53.545760 | orchestrator | 2026-04-01 02:35:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:56.590006 | orchestrator | 2026-04-01 02:35:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:56.591655 | orchestrator | 2026-04-01 02:35:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:56.591708 | orchestrator | 2026-04-01 02:35:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:35:59.635839 | orchestrator | 2026-04-01 02:35:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:35:59.636549 | orchestrator | 2026-04-01 02:35:59 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:35:59.636582 | orchestrator | 2026-04-01 02:35:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:02.678483 | orchestrator | 2026-04-01 02:36:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:02.680336 | orchestrator | 2026-04-01 02:36:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:02.680491 | orchestrator | 2026-04-01 02:36:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:05.732761 | orchestrator | 2026-04-01 02:36:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:05.734748 | orchestrator | 2026-04-01 02:36:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:05.734873 | orchestrator | 2026-04-01 02:36:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:08.781007 | orchestrator | 2026-04-01 02:36:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:08.782405 | orchestrator | 2026-04-01 02:36:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:08.782444 | orchestrator | 2026-04-01 02:36:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:11.837939 | orchestrator | 2026-04-01 02:36:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:11.839709 | orchestrator | 2026-04-01 02:36:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:11.839763 | orchestrator | 2026-04-01 02:36:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:14.886116 | orchestrator | 2026-04-01 02:36:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:14.887927 | orchestrator | 2026-04-01 02:36:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:36:14.888003 | orchestrator | 2026-04-01 02:36:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:17.931738 | orchestrator | 2026-04-01 02:36:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:17.933793 | orchestrator | 2026-04-01 02:36:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:17.933891 | orchestrator | 2026-04-01 02:36:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:20.976555 | orchestrator | 2026-04-01 02:36:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:20.982706 | orchestrator | 2026-04-01 02:36:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:20.982816 | orchestrator | 2026-04-01 02:36:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:24.028364 | orchestrator | 2026-04-01 02:36:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:24.029436 | orchestrator | 2026-04-01 02:36:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:24.029471 | orchestrator | 2026-04-01 02:36:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:27.079130 | orchestrator | 2026-04-01 02:36:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:27.079759 | orchestrator | 2026-04-01 02:36:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:27.079941 | orchestrator | 2026-04-01 02:36:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:36:30.120523 | orchestrator | 2026-04-01 02:36:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:36:30.122465 | orchestrator | 2026-04-01 02:36:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:36:30.122521 | orchestrator | 2026-04-01 02:36:30 | INFO  | Wait 1 second(s) 
until the next check
2026-04-01 02:36:33.164871 | orchestrator | 2026-04-01 02:36:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:36:33.166640 | orchestrator | 2026-04-01 02:36:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:36:33.166675 | orchestrator | 2026-04-01 02:36:33 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:36:36 to 02:41:44; both tasks remained in state STARTED throughout ...]
2026-04-01 02:41:47.267485 | orchestrator | 2026-04-01 02:41:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:41:47.269761 | orchestrator | 2026-04-01 02:41:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:41:47.269831 | orchestrator | 2026-04-01 02:41:47 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 02:41:50.315216 | orchestrator | 2026-04-01 02:41:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:41:50.316982 | orchestrator | 2026-04-01 02:41:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:41:50.317049 | orchestrator | 2026-04-01 02:41:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:41:53.362356 | orchestrator | 2026-04-01 02:41:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:41:53.366085 | orchestrator | 2026-04-01 02:41:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:41:53.366356 | orchestrator | 2026-04-01 02:41:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:41:56.413828 | orchestrator | 2026-04-01 02:41:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:41:56.415767 | orchestrator | 2026-04-01 02:41:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:41:56.415869 | orchestrator | 2026-04-01 02:41:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:41:59.469125 | orchestrator | 2026-04-01 02:41:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:41:59.471847 | orchestrator | 2026-04-01 02:41:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:41:59.471968 | orchestrator | 2026-04-01 02:41:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:02.520002 | orchestrator | 2026-04-01 02:42:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:02.521374 | orchestrator | 2026-04-01 02:42:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:02.521429 | orchestrator | 2026-04-01 02:42:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:05.570418 | orchestrator | 2026-04-01 
02:42:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:05.572230 | orchestrator | 2026-04-01 02:42:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:05.572379 | orchestrator | 2026-04-01 02:42:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:08.620463 | orchestrator | 2026-04-01 02:42:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:08.622373 | orchestrator | 2026-04-01 02:42:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:08.622437 | orchestrator | 2026-04-01 02:42:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:11.666209 | orchestrator | 2026-04-01 02:42:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:11.668051 | orchestrator | 2026-04-01 02:42:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:11.668199 | orchestrator | 2026-04-01 02:42:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:14.713304 | orchestrator | 2026-04-01 02:42:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:14.715673 | orchestrator | 2026-04-01 02:42:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:14.715731 | orchestrator | 2026-04-01 02:42:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:17.759253 | orchestrator | 2026-04-01 02:42:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:17.761455 | orchestrator | 2026-04-01 02:42:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:17.761511 | orchestrator | 2026-04-01 02:42:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:20.807466 | orchestrator | 2026-04-01 02:42:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:42:20.809827 | orchestrator | 2026-04-01 02:42:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:20.809895 | orchestrator | 2026-04-01 02:42:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:23.858371 | orchestrator | 2026-04-01 02:42:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:23.860776 | orchestrator | 2026-04-01 02:42:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:23.860923 | orchestrator | 2026-04-01 02:42:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:26.905774 | orchestrator | 2026-04-01 02:42:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:26.908838 | orchestrator | 2026-04-01 02:42:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:26.908971 | orchestrator | 2026-04-01 02:42:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:29.952674 | orchestrator | 2026-04-01 02:42:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:29.954123 | orchestrator | 2026-04-01 02:42:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:29.954196 | orchestrator | 2026-04-01 02:42:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:32.998249 | orchestrator | 2026-04-01 02:42:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:33.000083 | orchestrator | 2026-04-01 02:42:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:33.000145 | orchestrator | 2026-04-01 02:42:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:36.042812 | orchestrator | 2026-04-01 02:42:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:36.043795 | orchestrator | 2026-04-01 02:42:36 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:36.043957 | orchestrator | 2026-04-01 02:42:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:39.092540 | orchestrator | 2026-04-01 02:42:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:39.094514 | orchestrator | 2026-04-01 02:42:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:39.094587 | orchestrator | 2026-04-01 02:42:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:42.139657 | orchestrator | 2026-04-01 02:42:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:42.141009 | orchestrator | 2026-04-01 02:42:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:42.141050 | orchestrator | 2026-04-01 02:42:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:45.185920 | orchestrator | 2026-04-01 02:42:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:45.186870 | orchestrator | 2026-04-01 02:42:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:45.186942 | orchestrator | 2026-04-01 02:42:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:48.232992 | orchestrator | 2026-04-01 02:42:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:48.234942 | orchestrator | 2026-04-01 02:42:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:48.235004 | orchestrator | 2026-04-01 02:42:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:51.276274 | orchestrator | 2026-04-01 02:42:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:51.277702 | orchestrator | 2026-04-01 02:42:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:42:51.277908 | orchestrator | 2026-04-01 02:42:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:54.320247 | orchestrator | 2026-04-01 02:42:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:54.320360 | orchestrator | 2026-04-01 02:42:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:54.320377 | orchestrator | 2026-04-01 02:42:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:42:57.362602 | orchestrator | 2026-04-01 02:42:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:42:57.364506 | orchestrator | 2026-04-01 02:42:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:42:57.364573 | orchestrator | 2026-04-01 02:42:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:00.406460 | orchestrator | 2026-04-01 02:43:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:00.408552 | orchestrator | 2026-04-01 02:43:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:00.408635 | orchestrator | 2026-04-01 02:43:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:03.455405 | orchestrator | 2026-04-01 02:43:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:03.457975 | orchestrator | 2026-04-01 02:43:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:03.458065 | orchestrator | 2026-04-01 02:43:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:06.503154 | orchestrator | 2026-04-01 02:43:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:06.504091 | orchestrator | 2026-04-01 02:43:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:06.504348 | orchestrator | 2026-04-01 02:43:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:43:09.547292 | orchestrator | 2026-04-01 02:43:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:09.548207 | orchestrator | 2026-04-01 02:43:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:09.548283 | orchestrator | 2026-04-01 02:43:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:12.585987 | orchestrator | 2026-04-01 02:43:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:12.587471 | orchestrator | 2026-04-01 02:43:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:12.587612 | orchestrator | 2026-04-01 02:43:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:15.634207 | orchestrator | 2026-04-01 02:43:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:15.636153 | orchestrator | 2026-04-01 02:43:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:15.636274 | orchestrator | 2026-04-01 02:43:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:18.678623 | orchestrator | 2026-04-01 02:43:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:18.679965 | orchestrator | 2026-04-01 02:43:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:18.680009 | orchestrator | 2026-04-01 02:43:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:21.726744 | orchestrator | 2026-04-01 02:43:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:21.728819 | orchestrator | 2026-04-01 02:43:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:21.728983 | orchestrator | 2026-04-01 02:43:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:24.773644 | orchestrator | 2026-04-01 
02:43:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:24.774832 | orchestrator | 2026-04-01 02:43:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:24.774910 | orchestrator | 2026-04-01 02:43:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:27.816186 | orchestrator | 2026-04-01 02:43:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:27.818771 | orchestrator | 2026-04-01 02:43:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:27.818897 | orchestrator | 2026-04-01 02:43:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:30.860070 | orchestrator | 2026-04-01 02:43:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:30.862492 | orchestrator | 2026-04-01 02:43:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:30.862559 | orchestrator | 2026-04-01 02:43:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:33.909990 | orchestrator | 2026-04-01 02:43:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:33.911652 | orchestrator | 2026-04-01 02:43:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:33.911711 | orchestrator | 2026-04-01 02:43:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:36.959745 | orchestrator | 2026-04-01 02:43:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:36.960772 | orchestrator | 2026-04-01 02:43:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:36.960896 | orchestrator | 2026-04-01 02:43:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:40.002585 | orchestrator | 2026-04-01 02:43:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:43:40.004369 | orchestrator | 2026-04-01 02:43:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:40.004444 | orchestrator | 2026-04-01 02:43:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:43.044873 | orchestrator | 2026-04-01 02:43:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:43.046393 | orchestrator | 2026-04-01 02:43:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:43.046447 | orchestrator | 2026-04-01 02:43:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:46.095790 | orchestrator | 2026-04-01 02:43:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:46.097161 | orchestrator | 2026-04-01 02:43:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:46.097390 | orchestrator | 2026-04-01 02:43:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:49.147062 | orchestrator | 2026-04-01 02:43:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:49.150391 | orchestrator | 2026-04-01 02:43:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:49.150478 | orchestrator | 2026-04-01 02:43:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:52.191135 | orchestrator | 2026-04-01 02:43:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:52.191286 | orchestrator | 2026-04-01 02:43:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:52.191303 | orchestrator | 2026-04-01 02:43:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:55.233326 | orchestrator | 2026-04-01 02:43:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:55.237168 | orchestrator | 2026-04-01 02:43:55 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:55.237680 | orchestrator | 2026-04-01 02:43:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:43:58.287408 | orchestrator | 2026-04-01 02:43:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:43:58.289480 | orchestrator | 2026-04-01 02:43:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:43:58.289584 | orchestrator | 2026-04-01 02:43:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:01.330896 | orchestrator | 2026-04-01 02:44:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:01.333051 | orchestrator | 2026-04-01 02:44:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:01.333101 | orchestrator | 2026-04-01 02:44:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:04.379790 | orchestrator | 2026-04-01 02:44:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:04.382981 | orchestrator | 2026-04-01 02:44:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:04.383065 | orchestrator | 2026-04-01 02:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:07.429008 | orchestrator | 2026-04-01 02:44:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:07.430500 | orchestrator | 2026-04-01 02:44:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:07.430539 | orchestrator | 2026-04-01 02:44:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:10.480671 | orchestrator | 2026-04-01 02:44:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:10.484069 | orchestrator | 2026-04-01 02:44:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:44:10.484173 | orchestrator | 2026-04-01 02:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:13.536696 | orchestrator | 2026-04-01 02:44:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:13.539454 | orchestrator | 2026-04-01 02:44:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:13.539568 | orchestrator | 2026-04-01 02:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:16.586715 | orchestrator | 2026-04-01 02:44:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:16.588905 | orchestrator | 2026-04-01 02:44:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:16.588996 | orchestrator | 2026-04-01 02:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:19.638524 | orchestrator | 2026-04-01 02:44:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:19.641182 | orchestrator | 2026-04-01 02:44:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:19.641259 | orchestrator | 2026-04-01 02:44:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:22.685936 | orchestrator | 2026-04-01 02:44:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:22.688421 | orchestrator | 2026-04-01 02:44:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:22.688452 | orchestrator | 2026-04-01 02:44:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:25.733583 | orchestrator | 2026-04-01 02:44:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:25.735518 | orchestrator | 2026-04-01 02:44:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:25.735564 | orchestrator | 2026-04-01 02:44:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:44:28.784290 | orchestrator | 2026-04-01 02:44:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:28.785614 | orchestrator | 2026-04-01 02:44:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:28.785848 | orchestrator | 2026-04-01 02:44:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:31.834484 | orchestrator | 2026-04-01 02:44:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:31.836693 | orchestrator | 2026-04-01 02:44:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:31.836739 | orchestrator | 2026-04-01 02:44:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:34.880074 | orchestrator | 2026-04-01 02:44:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:34.882207 | orchestrator | 2026-04-01 02:44:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:34.882279 | orchestrator | 2026-04-01 02:44:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:37.923231 | orchestrator | 2026-04-01 02:44:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:37.925709 | orchestrator | 2026-04-01 02:44:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:37.925824 | orchestrator | 2026-04-01 02:44:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:40.967456 | orchestrator | 2026-04-01 02:44:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:40.969416 | orchestrator | 2026-04-01 02:44:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:40.969467 | orchestrator | 2026-04-01 02:44:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:44.019476 | orchestrator | 2026-04-01 
02:44:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:44.021299 | orchestrator | 2026-04-01 02:44:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:44.021378 | orchestrator | 2026-04-01 02:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:47.068462 | orchestrator | 2026-04-01 02:44:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:47.070214 | orchestrator | 2026-04-01 02:44:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:47.070336 | orchestrator | 2026-04-01 02:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:50.120206 | orchestrator | 2026-04-01 02:44:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:50.120440 | orchestrator | 2026-04-01 02:44:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:50.120738 | orchestrator | 2026-04-01 02:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:53.169112 | orchestrator | 2026-04-01 02:44:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:53.171661 | orchestrator | 2026-04-01 02:44:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:53.171737 | orchestrator | 2026-04-01 02:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:56.220717 | orchestrator | 2026-04-01 02:44:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:44:56.221960 | orchestrator | 2026-04-01 02:44:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:56.222305 | orchestrator | 2026-04-01 02:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:44:59.263689 | orchestrator | 2026-04-01 02:44:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:44:59.265302 | orchestrator | 2026-04-01 02:44:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:44:59.265354 | orchestrator | 2026-04-01 02:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:02.313626 | orchestrator | 2026-04-01 02:45:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:02.314487 | orchestrator | 2026-04-01 02:45:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:02.314580 | orchestrator | 2026-04-01 02:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:05.360262 | orchestrator | 2026-04-01 02:45:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:05.361992 | orchestrator | 2026-04-01 02:45:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:05.362075 | orchestrator | 2026-04-01 02:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:08.411462 | orchestrator | 2026-04-01 02:45:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:08.413195 | orchestrator | 2026-04-01 02:45:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:08.413262 | orchestrator | 2026-04-01 02:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:11.463214 | orchestrator | 2026-04-01 02:45:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:11.464951 | orchestrator | 2026-04-01 02:45:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:11.465006 | orchestrator | 2026-04-01 02:45:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:14.504452 | orchestrator | 2026-04-01 02:45:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:14.507065 | orchestrator | 2026-04-01 02:45:14 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:14.507209 | orchestrator | 2026-04-01 02:45:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:17.543415 | orchestrator | 2026-04-01 02:45:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:17.544836 | orchestrator | 2026-04-01 02:45:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:17.544885 | orchestrator | 2026-04-01 02:45:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:20.588141 | orchestrator | 2026-04-01 02:45:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:20.589922 | orchestrator | 2026-04-01 02:45:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:20.589969 | orchestrator | 2026-04-01 02:45:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:23.632090 | orchestrator | 2026-04-01 02:45:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:23.633877 | orchestrator | 2026-04-01 02:45:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:23.634418 | orchestrator | 2026-04-01 02:45:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:26.682625 | orchestrator | 2026-04-01 02:45:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:26.684429 | orchestrator | 2026-04-01 02:45:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:45:26.684635 | orchestrator | 2026-04-01 02:45:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:45:29.730998 | orchestrator | 2026-04-01 02:45:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:45:29.732893 | orchestrator | 2026-04-01 02:45:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:45:29.732959 | orchestrator | 2026-04-01 02:45:29 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:45:32.781799 | orchestrator | 2026-04-01 02:45:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:45:32.783993 | orchestrator | 2026-04-01 02:45:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:45:32.784055 | orchestrator | 2026-04-01 02:45:32 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 02:45:35 through 02:50:59; tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 remained in state STARTED throughout]
2026-04-01 02:51:02.208593 | orchestrator | 2026-04-01 02:51:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:51:02.210733 | orchestrator | 2026-04-01 02:51:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:51:02.210764 | orchestrator | 2026-04-01 02:51:02 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 02:51:05.260372 | orchestrator | 2026-04-01 02:51:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:05.262164 | orchestrator | 2026-04-01 02:51:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:05.262212 | orchestrator | 2026-04-01 02:51:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:08.305146 | orchestrator | 2026-04-01 02:51:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:08.307098 | orchestrator | 2026-04-01 02:51:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:08.307149 | orchestrator | 2026-04-01 02:51:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:11.358745 | orchestrator | 2026-04-01 02:51:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:11.361710 | orchestrator | 2026-04-01 02:51:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:11.361823 | orchestrator | 2026-04-01 02:51:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:14.415021 | orchestrator | 2026-04-01 02:51:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:14.417165 | orchestrator | 2026-04-01 02:51:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:14.417248 | orchestrator | 2026-04-01 02:51:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:17.469688 | orchestrator | 2026-04-01 02:51:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:17.471025 | orchestrator | 2026-04-01 02:51:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:17.471121 | orchestrator | 2026-04-01 02:51:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:20.528986 | orchestrator | 2026-04-01 
02:51:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:20.530342 | orchestrator | 2026-04-01 02:51:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:20.530401 | orchestrator | 2026-04-01 02:51:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:23.586104 | orchestrator | 2026-04-01 02:51:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:23.587167 | orchestrator | 2026-04-01 02:51:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:23.587293 | orchestrator | 2026-04-01 02:51:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:26.632543 | orchestrator | 2026-04-01 02:51:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:26.633467 | orchestrator | 2026-04-01 02:51:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:26.633529 | orchestrator | 2026-04-01 02:51:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:29.677434 | orchestrator | 2026-04-01 02:51:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:29.678638 | orchestrator | 2026-04-01 02:51:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:29.678764 | orchestrator | 2026-04-01 02:51:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:32.729531 | orchestrator | 2026-04-01 02:51:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:32.731274 | orchestrator | 2026-04-01 02:51:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:32.731323 | orchestrator | 2026-04-01 02:51:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:35.779550 | orchestrator | 2026-04-01 02:51:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:51:35.780967 | orchestrator | 2026-04-01 02:51:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:35.781011 | orchestrator | 2026-04-01 02:51:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:38.825982 | orchestrator | 2026-04-01 02:51:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:38.827387 | orchestrator | 2026-04-01 02:51:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:38.827415 | orchestrator | 2026-04-01 02:51:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:41.873527 | orchestrator | 2026-04-01 02:51:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:41.874857 | orchestrator | 2026-04-01 02:51:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:41.874916 | orchestrator | 2026-04-01 02:51:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:44.922173 | orchestrator | 2026-04-01 02:51:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:44.923619 | orchestrator | 2026-04-01 02:51:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:44.923681 | orchestrator | 2026-04-01 02:51:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:47.970960 | orchestrator | 2026-04-01 02:51:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:47.973138 | orchestrator | 2026-04-01 02:51:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:47.973593 | orchestrator | 2026-04-01 02:51:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:51.025748 | orchestrator | 2026-04-01 02:51:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:51.027940 | orchestrator | 2026-04-01 02:51:51 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:51.028052 | orchestrator | 2026-04-01 02:51:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:54.070730 | orchestrator | 2026-04-01 02:51:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:54.072366 | orchestrator | 2026-04-01 02:51:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:54.072393 | orchestrator | 2026-04-01 02:51:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:51:57.124335 | orchestrator | 2026-04-01 02:51:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:51:57.125552 | orchestrator | 2026-04-01 02:51:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:51:57.125650 | orchestrator | 2026-04-01 02:51:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:00.177067 | orchestrator | 2026-04-01 02:52:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:00.178755 | orchestrator | 2026-04-01 02:52:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:00.178854 | orchestrator | 2026-04-01 02:52:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:03.225165 | orchestrator | 2026-04-01 02:52:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:03.231223 | orchestrator | 2026-04-01 02:52:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:03.231279 | orchestrator | 2026-04-01 02:52:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:06.276326 | orchestrator | 2026-04-01 02:52:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:06.277102 | orchestrator | 2026-04-01 02:52:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:52:06.277542 | orchestrator | 2026-04-01 02:52:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:09.327550 | orchestrator | 2026-04-01 02:52:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:09.328452 | orchestrator | 2026-04-01 02:52:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:09.328494 | orchestrator | 2026-04-01 02:52:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:12.374910 | orchestrator | 2026-04-01 02:52:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:12.376716 | orchestrator | 2026-04-01 02:52:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:12.376784 | orchestrator | 2026-04-01 02:52:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:15.422466 | orchestrator | 2026-04-01 02:52:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:15.424060 | orchestrator | 2026-04-01 02:52:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:15.424141 | orchestrator | 2026-04-01 02:52:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:18.468773 | orchestrator | 2026-04-01 02:52:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:18.470547 | orchestrator | 2026-04-01 02:52:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:18.470611 | orchestrator | 2026-04-01 02:52:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:21.522471 | orchestrator | 2026-04-01 02:52:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:21.524677 | orchestrator | 2026-04-01 02:52:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:21.525088 | orchestrator | 2026-04-01 02:52:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:52:24.583121 | orchestrator | 2026-04-01 02:52:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:24.584859 | orchestrator | 2026-04-01 02:52:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:24.584900 | orchestrator | 2026-04-01 02:52:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:27.634350 | orchestrator | 2026-04-01 02:52:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:27.636363 | orchestrator | 2026-04-01 02:52:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:27.636422 | orchestrator | 2026-04-01 02:52:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:30.690818 | orchestrator | 2026-04-01 02:52:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:30.690911 | orchestrator | 2026-04-01 02:52:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:30.690919 | orchestrator | 2026-04-01 02:52:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:33.745198 | orchestrator | 2026-04-01 02:52:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:33.746346 | orchestrator | 2026-04-01 02:52:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:33.746455 | orchestrator | 2026-04-01 02:52:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:36.800929 | orchestrator | 2026-04-01 02:52:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:36.801815 | orchestrator | 2026-04-01 02:52:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:36.801858 | orchestrator | 2026-04-01 02:52:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:39.852985 | orchestrator | 2026-04-01 
02:52:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:39.855359 | orchestrator | 2026-04-01 02:52:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:39.855404 | orchestrator | 2026-04-01 02:52:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:42.906897 | orchestrator | 2026-04-01 02:52:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:42.908007 | orchestrator | 2026-04-01 02:52:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:42.908151 | orchestrator | 2026-04-01 02:52:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:45.960558 | orchestrator | 2026-04-01 02:52:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:45.961337 | orchestrator | 2026-04-01 02:52:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:45.961461 | orchestrator | 2026-04-01 02:52:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:49.011251 | orchestrator | 2026-04-01 02:52:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:49.015909 | orchestrator | 2026-04-01 02:52:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:49.016063 | orchestrator | 2026-04-01 02:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:52.064379 | orchestrator | 2026-04-01 02:52:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:52.066442 | orchestrator | 2026-04-01 02:52:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:52.066510 | orchestrator | 2026-04-01 02:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:55.110965 | orchestrator | 2026-04-01 02:52:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:52:55.112094 | orchestrator | 2026-04-01 02:52:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:55.112156 | orchestrator | 2026-04-01 02:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:52:58.163504 | orchestrator | 2026-04-01 02:52:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:52:58.165215 | orchestrator | 2026-04-01 02:52:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:52:58.165276 | orchestrator | 2026-04-01 02:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:01.216990 | orchestrator | 2026-04-01 02:53:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:01.218251 | orchestrator | 2026-04-01 02:53:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:01.218320 | orchestrator | 2026-04-01 02:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:04.266459 | orchestrator | 2026-04-01 02:53:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:04.268426 | orchestrator | 2026-04-01 02:53:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:04.268472 | orchestrator | 2026-04-01 02:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:07.322442 | orchestrator | 2026-04-01 02:53:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:07.324048 | orchestrator | 2026-04-01 02:53:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:07.324218 | orchestrator | 2026-04-01 02:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:10.371339 | orchestrator | 2026-04-01 02:53:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:10.372945 | orchestrator | 2026-04-01 02:53:10 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:10.373016 | orchestrator | 2026-04-01 02:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:13.425668 | orchestrator | 2026-04-01 02:53:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:13.426669 | orchestrator | 2026-04-01 02:53:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:13.426706 | orchestrator | 2026-04-01 02:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:16.473691 | orchestrator | 2026-04-01 02:53:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:16.475261 | orchestrator | 2026-04-01 02:53:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:16.475301 | orchestrator | 2026-04-01 02:53:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:19.521103 | orchestrator | 2026-04-01 02:53:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:19.522678 | orchestrator | 2026-04-01 02:53:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:19.522741 | orchestrator | 2026-04-01 02:53:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:22.571292 | orchestrator | 2026-04-01 02:53:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:22.572733 | orchestrator | 2026-04-01 02:53:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:22.572805 | orchestrator | 2026-04-01 02:53:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:25.623584 | orchestrator | 2026-04-01 02:53:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:25.626189 | orchestrator | 2026-04-01 02:53:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:53:25.626264 | orchestrator | 2026-04-01 02:53:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:28.675148 | orchestrator | 2026-04-01 02:53:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:28.677740 | orchestrator | 2026-04-01 02:53:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:28.677806 | orchestrator | 2026-04-01 02:53:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:31.720967 | orchestrator | 2026-04-01 02:53:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:31.721782 | orchestrator | 2026-04-01 02:53:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:31.721815 | orchestrator | 2026-04-01 02:53:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:34.767617 | orchestrator | 2026-04-01 02:53:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:34.768040 | orchestrator | 2026-04-01 02:53:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:34.768067 | orchestrator | 2026-04-01 02:53:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:37.820055 | orchestrator | 2026-04-01 02:53:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:37.821830 | orchestrator | 2026-04-01 02:53:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:37.821965 | orchestrator | 2026-04-01 02:53:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:40.868457 | orchestrator | 2026-04-01 02:53:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:40.870308 | orchestrator | 2026-04-01 02:53:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:40.870369 | orchestrator | 2026-04-01 02:53:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 02:53:43.926154 | orchestrator | 2026-04-01 02:53:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:43.927655 | orchestrator | 2026-04-01 02:53:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:43.927715 | orchestrator | 2026-04-01 02:53:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:46.970140 | orchestrator | 2026-04-01 02:53:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:46.972215 | orchestrator | 2026-04-01 02:53:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:46.972319 | orchestrator | 2026-04-01 02:53:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:50.023299 | orchestrator | 2026-04-01 02:53:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:50.026402 | orchestrator | 2026-04-01 02:53:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:50.026472 | orchestrator | 2026-04-01 02:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:53.070189 | orchestrator | 2026-04-01 02:53:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:53.071949 | orchestrator | 2026-04-01 02:53:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:53.072004 | orchestrator | 2026-04-01 02:53:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:56.117819 | orchestrator | 2026-04-01 02:53:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:56.119475 | orchestrator | 2026-04-01 02:53:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:56.119686 | orchestrator | 2026-04-01 02:53:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:53:59.159981 | orchestrator | 2026-04-01 
02:53:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:53:59.162047 | orchestrator | 2026-04-01 02:53:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:53:59.162094 | orchestrator | 2026-04-01 02:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:02.215701 | orchestrator | 2026-04-01 02:54:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:02.217714 | orchestrator | 2026-04-01 02:54:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:02.217757 | orchestrator | 2026-04-01 02:54:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:05.272505 | orchestrator | 2026-04-01 02:54:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:05.275420 | orchestrator | 2026-04-01 02:54:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:05.275520 | orchestrator | 2026-04-01 02:54:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:08.326134 | orchestrator | 2026-04-01 02:54:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:08.326409 | orchestrator | 2026-04-01 02:54:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:08.326808 | orchestrator | 2026-04-01 02:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:11.375501 | orchestrator | 2026-04-01 02:54:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:11.377322 | orchestrator | 2026-04-01 02:54:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:11.377379 | orchestrator | 2026-04-01 02:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:14.430080 | orchestrator | 2026-04-01 02:54:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 02:54:14.434339 | orchestrator | 2026-04-01 02:54:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:14.434412 | orchestrator | 2026-04-01 02:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:17.484597 | orchestrator | 2026-04-01 02:54:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:17.486954 | orchestrator | 2026-04-01 02:54:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:17.487017 | orchestrator | 2026-04-01 02:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:20.532048 | orchestrator | 2026-04-01 02:54:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:20.532542 | orchestrator | 2026-04-01 02:54:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:20.532565 | orchestrator | 2026-04-01 02:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:23.580223 | orchestrator | 2026-04-01 02:54:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:23.582192 | orchestrator | 2026-04-01 02:54:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:23.582313 | orchestrator | 2026-04-01 02:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:26.624188 | orchestrator | 2026-04-01 02:54:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:26.625508 | orchestrator | 2026-04-01 02:54:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:26.625562 | orchestrator | 2026-04-01 02:54:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:29.670187 | orchestrator | 2026-04-01 02:54:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:29.671368 | orchestrator | 2026-04-01 02:54:29 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:29.671399 | orchestrator | 2026-04-01 02:54:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:32.722202 | orchestrator | 2026-04-01 02:54:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:32.724221 | orchestrator | 2026-04-01 02:54:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:32.724282 | orchestrator | 2026-04-01 02:54:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:35.775255 | orchestrator | 2026-04-01 02:54:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:35.777592 | orchestrator | 2026-04-01 02:54:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:35.777786 | orchestrator | 2026-04-01 02:54:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:54:38.825215 | orchestrator | 2026-04-01 02:54:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:54:38.826673 | orchestrator | 2026-04-01 02:54:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:54:38.826691 | orchestrator | 2026-04-01 02:54:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:56:41.986615 | orchestrator | 2026-04-01 02:56:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:56:41.986739 | orchestrator | 2026-04-01 02:56:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 02:56:41.986754 | orchestrator | 2026-04-01 02:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 02:56:45.025141 | orchestrator | 2026-04-01 02:56:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 02:56:45.027045 | orchestrator | 2026-04-01 02:56:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
02:56:45.027133 | orchestrator | 2026-04-01 02:56:45 | INFO  | Wait 1 second(s) until the next check
2026-04-01 02:56:48.076469 | orchestrator | 2026-04-01 02:56:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 02:56:48.078795 | orchestrator | 2026-04-01 02:56:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 02:56:48.078865 | orchestrator | 2026-04-01 02:56:48 | INFO  | Wait 1 second(s) until the next check
[… repeated polling output from 02:56:51 to 03:01:43 elided: the same two tasks (c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635) remained in state STARTED, re-checked roughly every 3 seconds …]
2026-04-01 03:01:46.680779 | orchestrator | 2026-04-01 03:01:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:01:46.680871 | orchestrator | 2026-04-01 03:01:46 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:01:46.680932 | orchestrator | 2026-04-01 03:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:01:49.718387 | orchestrator | 2026-04-01 03:01:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:01:49.721275 | orchestrator | 2026-04-01 03:01:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:01:49.721340 | orchestrator | 2026-04-01 03:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:01:52.767773 | orchestrator | 2026-04-01 03:01:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:01:52.769462 | orchestrator | 2026-04-01 03:01:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:01:52.769608 | orchestrator | 2026-04-01 03:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:01:55.809256 | orchestrator | 2026-04-01 03:01:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:01:55.810463 | orchestrator | 2026-04-01 03:01:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:01:55.810788 | orchestrator | 2026-04-01 03:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:01:58.860844 | orchestrator | 2026-04-01 03:01:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:01:58.862432 | orchestrator | 2026-04-01 03:01:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:01:58.862501 | orchestrator | 2026-04-01 03:01:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:01.913552 | orchestrator | 2026-04-01 03:02:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:01.915065 | orchestrator | 2026-04-01 03:02:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:02:01.915264 | orchestrator | 2026-04-01 03:02:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:04.958523 | orchestrator | 2026-04-01 03:02:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:04.959723 | orchestrator | 2026-04-01 03:02:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:04.959763 | orchestrator | 2026-04-01 03:02:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:08.006499 | orchestrator | 2026-04-01 03:02:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:08.008439 | orchestrator | 2026-04-01 03:02:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:08.008776 | orchestrator | 2026-04-01 03:02:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:11.052876 | orchestrator | 2026-04-01 03:02:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:11.055345 | orchestrator | 2026-04-01 03:02:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:11.055401 | orchestrator | 2026-04-01 03:02:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:14.103736 | orchestrator | 2026-04-01 03:02:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:14.105255 | orchestrator | 2026-04-01 03:02:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:14.105334 | orchestrator | 2026-04-01 03:02:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:17.158253 | orchestrator | 2026-04-01 03:02:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:17.158327 | orchestrator | 2026-04-01 03:02:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:17.158334 | orchestrator | 2026-04-01 03:02:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:02:20.202449 | orchestrator | 2026-04-01 03:02:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:20.203660 | orchestrator | 2026-04-01 03:02:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:20.203695 | orchestrator | 2026-04-01 03:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:23.245211 | orchestrator | 2026-04-01 03:02:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:23.246785 | orchestrator | 2026-04-01 03:02:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:23.246953 | orchestrator | 2026-04-01 03:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:26.290740 | orchestrator | 2026-04-01 03:02:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:26.293236 | orchestrator | 2026-04-01 03:02:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:26.293293 | orchestrator | 2026-04-01 03:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:29.340846 | orchestrator | 2026-04-01 03:02:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:29.341321 | orchestrator | 2026-04-01 03:02:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:29.341362 | orchestrator | 2026-04-01 03:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:32.390544 | orchestrator | 2026-04-01 03:02:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:32.390625 | orchestrator | 2026-04-01 03:02:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:32.390636 | orchestrator | 2026-04-01 03:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:35.442265 | orchestrator | 2026-04-01 
03:02:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:35.443520 | orchestrator | 2026-04-01 03:02:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:35.443709 | orchestrator | 2026-04-01 03:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:38.492193 | orchestrator | 2026-04-01 03:02:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:38.494320 | orchestrator | 2026-04-01 03:02:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:38.494400 | orchestrator | 2026-04-01 03:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:41.548390 | orchestrator | 2026-04-01 03:02:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:41.551396 | orchestrator | 2026-04-01 03:02:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:41.551865 | orchestrator | 2026-04-01 03:02:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:44.606287 | orchestrator | 2026-04-01 03:02:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:44.608388 | orchestrator | 2026-04-01 03:02:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:44.608470 | orchestrator | 2026-04-01 03:02:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:47.661690 | orchestrator | 2026-04-01 03:02:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:47.662463 | orchestrator | 2026-04-01 03:02:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:47.662501 | orchestrator | 2026-04-01 03:02:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:50.713749 | orchestrator | 2026-04-01 03:02:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:02:50.715675 | orchestrator | 2026-04-01 03:02:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:50.715755 | orchestrator | 2026-04-01 03:02:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:53.767391 | orchestrator | 2026-04-01 03:02:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:53.767963 | orchestrator | 2026-04-01 03:02:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:53.768001 | orchestrator | 2026-04-01 03:02:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:56.821282 | orchestrator | 2026-04-01 03:02:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:56.823321 | orchestrator | 2026-04-01 03:02:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:56.823412 | orchestrator | 2026-04-01 03:02:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:02:59.870883 | orchestrator | 2026-04-01 03:02:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:02:59.871468 | orchestrator | 2026-04-01 03:02:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:02:59.871560 | orchestrator | 2026-04-01 03:02:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:02.925193 | orchestrator | 2026-04-01 03:03:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:02.925276 | orchestrator | 2026-04-01 03:03:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:02.925286 | orchestrator | 2026-04-01 03:03:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:05.966751 | orchestrator | 2026-04-01 03:03:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:05.968909 | orchestrator | 2026-04-01 03:03:05 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:05.969016 | orchestrator | 2026-04-01 03:03:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:09.017905 | orchestrator | 2026-04-01 03:03:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:09.019214 | orchestrator | 2026-04-01 03:03:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:09.019581 | orchestrator | 2026-04-01 03:03:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:12.068088 | orchestrator | 2026-04-01 03:03:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:12.068199 | orchestrator | 2026-04-01 03:03:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:12.068235 | orchestrator | 2026-04-01 03:03:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:15.112993 | orchestrator | 2026-04-01 03:03:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:15.114437 | orchestrator | 2026-04-01 03:03:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:15.114481 | orchestrator | 2026-04-01 03:03:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:18.167228 | orchestrator | 2026-04-01 03:03:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:18.168281 | orchestrator | 2026-04-01 03:03:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:18.168368 | orchestrator | 2026-04-01 03:03:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:21.219526 | orchestrator | 2026-04-01 03:03:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:21.221432 | orchestrator | 2026-04-01 03:03:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:03:21.221472 | orchestrator | 2026-04-01 03:03:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:24.272177 | orchestrator | 2026-04-01 03:03:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:24.274347 | orchestrator | 2026-04-01 03:03:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:24.274418 | orchestrator | 2026-04-01 03:03:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:27.326483 | orchestrator | 2026-04-01 03:03:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:27.328063 | orchestrator | 2026-04-01 03:03:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:27.328155 | orchestrator | 2026-04-01 03:03:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:30.378890 | orchestrator | 2026-04-01 03:03:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:30.379237 | orchestrator | 2026-04-01 03:03:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:30.379264 | orchestrator | 2026-04-01 03:03:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:33.430281 | orchestrator | 2026-04-01 03:03:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:33.431535 | orchestrator | 2026-04-01 03:03:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:33.431601 | orchestrator | 2026-04-01 03:03:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:36.480127 | orchestrator | 2026-04-01 03:03:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:36.481487 | orchestrator | 2026-04-01 03:03:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:36.481539 | orchestrator | 2026-04-01 03:03:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:03:39.532230 | orchestrator | 2026-04-01 03:03:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:39.533921 | orchestrator | 2026-04-01 03:03:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:39.534006 | orchestrator | 2026-04-01 03:03:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:42.600116 | orchestrator | 2026-04-01 03:03:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:42.602060 | orchestrator | 2026-04-01 03:03:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:42.602176 | orchestrator | 2026-04-01 03:03:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:45.651726 | orchestrator | 2026-04-01 03:03:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:45.654079 | orchestrator | 2026-04-01 03:03:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:45.654199 | orchestrator | 2026-04-01 03:03:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:48.706583 | orchestrator | 2026-04-01 03:03:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:48.707742 | orchestrator | 2026-04-01 03:03:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:48.707887 | orchestrator | 2026-04-01 03:03:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:51.757567 | orchestrator | 2026-04-01 03:03:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:51.758344 | orchestrator | 2026-04-01 03:03:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:51.758387 | orchestrator | 2026-04-01 03:03:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:54.802207 | orchestrator | 2026-04-01 
03:03:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:54.802705 | orchestrator | 2026-04-01 03:03:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:54.802738 | orchestrator | 2026-04-01 03:03:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:03:57.852735 | orchestrator | 2026-04-01 03:03:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:03:57.853377 | orchestrator | 2026-04-01 03:03:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:03:57.853416 | orchestrator | 2026-04-01 03:03:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:00.911536 | orchestrator | 2026-04-01 03:04:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:00.914542 | orchestrator | 2026-04-01 03:04:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:00.914605 | orchestrator | 2026-04-01 03:04:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:03.959146 | orchestrator | 2026-04-01 03:04:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:03.960280 | orchestrator | 2026-04-01 03:04:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:03.960333 | orchestrator | 2026-04-01 03:04:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:07.010428 | orchestrator | 2026-04-01 03:04:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:07.012271 | orchestrator | 2026-04-01 03:04:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:07.012364 | orchestrator | 2026-04-01 03:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:10.064213 | orchestrator | 2026-04-01 03:04:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:04:10.066493 | orchestrator | 2026-04-01 03:04:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:10.066555 | orchestrator | 2026-04-01 03:04:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:13.114125 | orchestrator | 2026-04-01 03:04:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:13.116419 | orchestrator | 2026-04-01 03:04:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:13.116507 | orchestrator | 2026-04-01 03:04:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:16.162320 | orchestrator | 2026-04-01 03:04:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:16.164563 | orchestrator | 2026-04-01 03:04:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:16.164629 | orchestrator | 2026-04-01 03:04:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:19.208539 | orchestrator | 2026-04-01 03:04:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:19.209989 | orchestrator | 2026-04-01 03:04:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:19.210392 | orchestrator | 2026-04-01 03:04:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:22.252617 | orchestrator | 2026-04-01 03:04:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:22.255116 | orchestrator | 2026-04-01 03:04:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:22.255299 | orchestrator | 2026-04-01 03:04:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:25.308311 | orchestrator | 2026-04-01 03:04:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:25.309357 | orchestrator | 2026-04-01 03:04:25 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:25.309395 | orchestrator | 2026-04-01 03:04:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:28.365525 | orchestrator | 2026-04-01 03:04:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:28.368744 | orchestrator | 2026-04-01 03:04:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:28.368798 | orchestrator | 2026-04-01 03:04:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:31.414222 | orchestrator | 2026-04-01 03:04:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:31.416728 | orchestrator | 2026-04-01 03:04:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:31.416776 | orchestrator | 2026-04-01 03:04:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:34.455043 | orchestrator | 2026-04-01 03:04:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:34.455232 | orchestrator | 2026-04-01 03:04:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:34.455255 | orchestrator | 2026-04-01 03:04:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:37.500684 | orchestrator | 2026-04-01 03:04:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:37.500795 | orchestrator | 2026-04-01 03:04:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:37.500811 | orchestrator | 2026-04-01 03:04:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:40.549655 | orchestrator | 2026-04-01 03:04:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:40.549796 | orchestrator | 2026-04-01 03:04:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:04:40.549843 | orchestrator | 2026-04-01 03:04:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:43.601220 | orchestrator | 2026-04-01 03:04:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:43.603508 | orchestrator | 2026-04-01 03:04:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:43.603788 | orchestrator | 2026-04-01 03:04:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:46.651055 | orchestrator | 2026-04-01 03:04:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:46.653299 | orchestrator | 2026-04-01 03:04:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:46.653373 | orchestrator | 2026-04-01 03:04:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:49.701990 | orchestrator | 2026-04-01 03:04:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:49.703214 | orchestrator | 2026-04-01 03:04:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:49.703287 | orchestrator | 2026-04-01 03:04:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:52.752168 | orchestrator | 2026-04-01 03:04:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:52.754233 | orchestrator | 2026-04-01 03:04:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:52.754276 | orchestrator | 2026-04-01 03:04:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:04:55.800650 | orchestrator | 2026-04-01 03:04:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:55.802987 | orchestrator | 2026-04-01 03:04:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:55.803137 | orchestrator | 2026-04-01 03:04:55 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:04:58.853158 | orchestrator | 2026-04-01 03:04:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:04:58.854451 | orchestrator | 2026-04-01 03:04:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:04:58.854522 | orchestrator | 2026-04-01 03:04:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:01.904132 | orchestrator | 2026-04-01 03:05:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:01.906012 | orchestrator | 2026-04-01 03:05:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:01.906120 | orchestrator | 2026-04-01 03:05:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:04.954600 | orchestrator | 2026-04-01 03:05:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:04.957322 | orchestrator | 2026-04-01 03:05:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:04.957398 | orchestrator | 2026-04-01 03:05:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:08.006796 | orchestrator | 2026-04-01 03:05:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:08.008383 | orchestrator | 2026-04-01 03:05:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:08.008427 | orchestrator | 2026-04-01 03:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:11.053674 | orchestrator | 2026-04-01 03:05:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:11.056333 | orchestrator | 2026-04-01 03:05:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:11.056409 | orchestrator | 2026-04-01 03:05:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:14.103402 | orchestrator | 2026-04-01 
03:05:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:14.104701 | orchestrator | 2026-04-01 03:05:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:14.104768 | orchestrator | 2026-04-01 03:05:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:17.149210 | orchestrator | 2026-04-01 03:05:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:17.151241 | orchestrator | 2026-04-01 03:05:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:17.151304 | orchestrator | 2026-04-01 03:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:20.199092 | orchestrator | 2026-04-01 03:05:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:20.201262 | orchestrator | 2026-04-01 03:05:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:20.201327 | orchestrator | 2026-04-01 03:05:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:23.253262 | orchestrator | 2026-04-01 03:05:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:23.254801 | orchestrator | 2026-04-01 03:05:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:23.254885 | orchestrator | 2026-04-01 03:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:26.312295 | orchestrator | 2026-04-01 03:05:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:05:26.314433 | orchestrator | 2026-04-01 03:05:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:26.314493 | orchestrator | 2026-04-01 03:05:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:29.368717 | orchestrator | 2026-04-01 03:05:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:05:29.370357 | orchestrator | 2026-04-01 03:05:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:05:29.370406 | orchestrator | 2026-04-01 03:05:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:05:32.423049 | orchestrator | 2026-04-01 03:05:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
[... identical polling output repeated every ~3 seconds from 03:05:32 to 03:11:01; tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 remained in state STARTED throughout ...]
2026-04-01 03:11:01.800544 | orchestrator | 2026-04-01 03:11:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:01.802292 | orchestrator | 2026-04-01 03:11:01 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:01.802388 | orchestrator | 2026-04-01 03:11:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:04.850552 | orchestrator | 2026-04-01 03:11:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:04.851507 | orchestrator | 2026-04-01 03:11:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:04.851604 | orchestrator | 2026-04-01 03:11:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:07.904562 | orchestrator | 2026-04-01 03:11:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:07.905182 | orchestrator | 2026-04-01 03:11:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:07.905220 | orchestrator | 2026-04-01 03:11:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:10.958880 | orchestrator | 2026-04-01 03:11:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:10.961035 | orchestrator | 2026-04-01 03:11:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:10.961113 | orchestrator | 2026-04-01 03:11:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:14.008619 | orchestrator | 2026-04-01 03:11:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:14.013041 | orchestrator | 2026-04-01 03:11:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:14.013142 | orchestrator | 2026-04-01 03:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:17.069258 | orchestrator | 2026-04-01 03:11:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:17.070827 | orchestrator | 2026-04-01 03:11:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:11:17.070876 | orchestrator | 2026-04-01 03:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:20.122886 | orchestrator | 2026-04-01 03:11:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:20.124531 | orchestrator | 2026-04-01 03:11:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:20.124564 | orchestrator | 2026-04-01 03:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:23.171758 | orchestrator | 2026-04-01 03:11:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:23.172434 | orchestrator | 2026-04-01 03:11:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:23.173867 | orchestrator | 2026-04-01 03:11:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:26.221778 | orchestrator | 2026-04-01 03:11:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:26.224334 | orchestrator | 2026-04-01 03:11:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:26.224404 | orchestrator | 2026-04-01 03:11:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:29.271768 | orchestrator | 2026-04-01 03:11:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:29.273611 | orchestrator | 2026-04-01 03:11:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:29.273753 | orchestrator | 2026-04-01 03:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:32.329752 | orchestrator | 2026-04-01 03:11:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:32.331837 | orchestrator | 2026-04-01 03:11:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:32.331993 | orchestrator | 2026-04-01 03:11:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:11:35.381313 | orchestrator | 2026-04-01 03:11:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:35.383295 | orchestrator | 2026-04-01 03:11:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:35.383422 | orchestrator | 2026-04-01 03:11:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:38.425668 | orchestrator | 2026-04-01 03:11:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:38.427509 | orchestrator | 2026-04-01 03:11:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:38.427593 | orchestrator | 2026-04-01 03:11:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:41.478891 | orchestrator | 2026-04-01 03:11:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:41.479967 | orchestrator | 2026-04-01 03:11:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:41.480008 | orchestrator | 2026-04-01 03:11:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:44.530969 | orchestrator | 2026-04-01 03:11:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:44.531976 | orchestrator | 2026-04-01 03:11:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:44.532013 | orchestrator | 2026-04-01 03:11:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:47.585268 | orchestrator | 2026-04-01 03:11:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:47.585687 | orchestrator | 2026-04-01 03:11:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:47.585718 | orchestrator | 2026-04-01 03:11:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:50.639071 | orchestrator | 2026-04-01 
03:11:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:50.640533 | orchestrator | 2026-04-01 03:11:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:50.640594 | orchestrator | 2026-04-01 03:11:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:53.692697 | orchestrator | 2026-04-01 03:11:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:53.694102 | orchestrator | 2026-04-01 03:11:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:53.694232 | orchestrator | 2026-04-01 03:11:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:56.744728 | orchestrator | 2026-04-01 03:11:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:56.746918 | orchestrator | 2026-04-01 03:11:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:56.746984 | orchestrator | 2026-04-01 03:11:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:11:59.796040 | orchestrator | 2026-04-01 03:11:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:11:59.796257 | orchestrator | 2026-04-01 03:11:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:11:59.796633 | orchestrator | 2026-04-01 03:11:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:02.841413 | orchestrator | 2026-04-01 03:12:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:02.843090 | orchestrator | 2026-04-01 03:12:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:02.843140 | orchestrator | 2026-04-01 03:12:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:05.888200 | orchestrator | 2026-04-01 03:12:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:12:05.889752 | orchestrator | 2026-04-01 03:12:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:05.889820 | orchestrator | 2026-04-01 03:12:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:08.938209 | orchestrator | 2026-04-01 03:12:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:08.939938 | orchestrator | 2026-04-01 03:12:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:08.940010 | orchestrator | 2026-04-01 03:12:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:11.988307 | orchestrator | 2026-04-01 03:12:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:11.988601 | orchestrator | 2026-04-01 03:12:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:11.988641 | orchestrator | 2026-04-01 03:12:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:15.036357 | orchestrator | 2026-04-01 03:12:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:15.037480 | orchestrator | 2026-04-01 03:12:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:15.037736 | orchestrator | 2026-04-01 03:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:18.078937 | orchestrator | 2026-04-01 03:12:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:18.079690 | orchestrator | 2026-04-01 03:12:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:18.079723 | orchestrator | 2026-04-01 03:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:21.132727 | orchestrator | 2026-04-01 03:12:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:21.132838 | orchestrator | 2026-04-01 03:12:21 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:21.132867 | orchestrator | 2026-04-01 03:12:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:24.171943 | orchestrator | 2026-04-01 03:12:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:24.172889 | orchestrator | 2026-04-01 03:12:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:24.172976 | orchestrator | 2026-04-01 03:12:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:27.219751 | orchestrator | 2026-04-01 03:12:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:27.222160 | orchestrator | 2026-04-01 03:12:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:27.222196 | orchestrator | 2026-04-01 03:12:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:30.261745 | orchestrator | 2026-04-01 03:12:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:30.262667 | orchestrator | 2026-04-01 03:12:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:30.262712 | orchestrator | 2026-04-01 03:12:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:33.313265 | orchestrator | 2026-04-01 03:12:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:33.316729 | orchestrator | 2026-04-01 03:12:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:33.316866 | orchestrator | 2026-04-01 03:12:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:36.363941 | orchestrator | 2026-04-01 03:12:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:36.365711 | orchestrator | 2026-04-01 03:12:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:12:36.365778 | orchestrator | 2026-04-01 03:12:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:39.420945 | orchestrator | 2026-04-01 03:12:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:39.422184 | orchestrator | 2026-04-01 03:12:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:39.422218 | orchestrator | 2026-04-01 03:12:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:42.472691 | orchestrator | 2026-04-01 03:12:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:42.472827 | orchestrator | 2026-04-01 03:12:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:42.472855 | orchestrator | 2026-04-01 03:12:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:45.514242 | orchestrator | 2026-04-01 03:12:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:45.515222 | orchestrator | 2026-04-01 03:12:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:45.515354 | orchestrator | 2026-04-01 03:12:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:48.556126 | orchestrator | 2026-04-01 03:12:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:48.557718 | orchestrator | 2026-04-01 03:12:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:48.557801 | orchestrator | 2026-04-01 03:12:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:51.609062 | orchestrator | 2026-04-01 03:12:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:51.610379 | orchestrator | 2026-04-01 03:12:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:51.610536 | orchestrator | 2026-04-01 03:12:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:12:54.662719 | orchestrator | 2026-04-01 03:12:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:54.666373 | orchestrator | 2026-04-01 03:12:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:54.666454 | orchestrator | 2026-04-01 03:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:12:57.712327 | orchestrator | 2026-04-01 03:12:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:12:57.714339 | orchestrator | 2026-04-01 03:12:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:12:57.714459 | orchestrator | 2026-04-01 03:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:00.761774 | orchestrator | 2026-04-01 03:13:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:00.763700 | orchestrator | 2026-04-01 03:13:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:00.764061 | orchestrator | 2026-04-01 03:13:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:03.815726 | orchestrator | 2026-04-01 03:13:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:03.816716 | orchestrator | 2026-04-01 03:13:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:03.816756 | orchestrator | 2026-04-01 03:13:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:06.869997 | orchestrator | 2026-04-01 03:13:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:06.871119 | orchestrator | 2026-04-01 03:13:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:06.871262 | orchestrator | 2026-04-01 03:13:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:09.921776 | orchestrator | 2026-04-01 
03:13:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:09.922575 | orchestrator | 2026-04-01 03:13:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:09.922621 | orchestrator | 2026-04-01 03:13:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:12.973718 | orchestrator | 2026-04-01 03:13:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:12.973906 | orchestrator | 2026-04-01 03:13:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:12.973927 | orchestrator | 2026-04-01 03:13:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:16.024091 | orchestrator | 2026-04-01 03:13:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:16.024173 | orchestrator | 2026-04-01 03:13:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:16.024183 | orchestrator | 2026-04-01 03:13:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:19.068964 | orchestrator | 2026-04-01 03:13:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:19.069227 | orchestrator | 2026-04-01 03:13:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:19.069254 | orchestrator | 2026-04-01 03:13:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:22.115039 | orchestrator | 2026-04-01 03:13:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:22.115158 | orchestrator | 2026-04-01 03:13:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:22.115167 | orchestrator | 2026-04-01 03:13:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:25.165833 | orchestrator | 2026-04-01 03:13:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:13:25.166965 | orchestrator | 2026-04-01 03:13:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:25.167002 | orchestrator | 2026-04-01 03:13:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:28.214475 | orchestrator | 2026-04-01 03:13:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:28.217311 | orchestrator | 2026-04-01 03:13:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:28.217536 | orchestrator | 2026-04-01 03:13:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:31.265654 | orchestrator | 2026-04-01 03:13:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:31.266784 | orchestrator | 2026-04-01 03:13:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:31.266879 | orchestrator | 2026-04-01 03:13:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:34.311737 | orchestrator | 2026-04-01 03:13:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:34.313332 | orchestrator | 2026-04-01 03:13:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:34.313373 | orchestrator | 2026-04-01 03:13:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:37.359769 | orchestrator | 2026-04-01 03:13:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:37.361739 | orchestrator | 2026-04-01 03:13:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:37.362061 | orchestrator | 2026-04-01 03:13:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:40.416875 | orchestrator | 2026-04-01 03:13:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:40.417259 | orchestrator | 2026-04-01 03:13:40 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:40.417282 | orchestrator | 2026-04-01 03:13:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:43.468565 | orchestrator | 2026-04-01 03:13:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:43.470313 | orchestrator | 2026-04-01 03:13:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:43.470359 | orchestrator | 2026-04-01 03:13:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:46.518953 | orchestrator | 2026-04-01 03:13:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:46.520898 | orchestrator | 2026-04-01 03:13:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:46.520958 | orchestrator | 2026-04-01 03:13:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:49.571259 | orchestrator | 2026-04-01 03:13:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:49.573435 | orchestrator | 2026-04-01 03:13:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:49.573496 | orchestrator | 2026-04-01 03:13:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:52.624102 | orchestrator | 2026-04-01 03:13:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:52.625575 | orchestrator | 2026-04-01 03:13:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:52.625666 | orchestrator | 2026-04-01 03:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:55.674848 | orchestrator | 2026-04-01 03:13:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:55.676452 | orchestrator | 2026-04-01 03:13:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:13:55.676514 | orchestrator | 2026-04-01 03:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:13:58.724183 | orchestrator | 2026-04-01 03:13:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:13:58.724279 | orchestrator | 2026-04-01 03:13:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:13:58.724295 | orchestrator | 2026-04-01 03:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:01.767212 | orchestrator | 2026-04-01 03:14:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:01.767481 | orchestrator | 2026-04-01 03:14:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:01.767538 | orchestrator | 2026-04-01 03:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:04.817334 | orchestrator | 2026-04-01 03:14:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:04.818130 | orchestrator | 2026-04-01 03:14:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:04.818165 | orchestrator | 2026-04-01 03:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:07.867111 | orchestrator | 2026-04-01 03:14:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:07.867874 | orchestrator | 2026-04-01 03:14:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:07.867935 | orchestrator | 2026-04-01 03:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:10.912839 | orchestrator | 2026-04-01 03:14:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:10.913993 | orchestrator | 2026-04-01 03:14:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:10.914139 | orchestrator | 2026-04-01 03:14:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:14:13.961153 | orchestrator | 2026-04-01 03:14:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:13.962173 | orchestrator | 2026-04-01 03:14:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:13.962228 | orchestrator | 2026-04-01 03:14:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:17.017208 | orchestrator | 2026-04-01 03:14:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:17.019508 | orchestrator | 2026-04-01 03:14:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:17.019592 | orchestrator | 2026-04-01 03:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:20.075468 | orchestrator | 2026-04-01 03:14:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:20.075576 | orchestrator | 2026-04-01 03:14:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:20.075593 | orchestrator | 2026-04-01 03:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:23.118726 | orchestrator | 2026-04-01 03:14:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:23.120449 | orchestrator | 2026-04-01 03:14:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:23.120490 | orchestrator | 2026-04-01 03:14:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:26.172704 | orchestrator | 2026-04-01 03:14:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:26.173643 | orchestrator | 2026-04-01 03:14:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:26.173696 | orchestrator | 2026-04-01 03:14:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:29.222346 | orchestrator | 2026-04-01 
03:14:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:29.223943 | orchestrator | 2026-04-01 03:14:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:29.224027 | orchestrator | 2026-04-01 03:14:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:32.273869 | orchestrator | 2026-04-01 03:14:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:32.274828 | orchestrator | 2026-04-01 03:14:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:32.274876 | orchestrator | 2026-04-01 03:14:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:35.331584 | orchestrator | 2026-04-01 03:14:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:35.331698 | orchestrator | 2026-04-01 03:14:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:35.331718 | orchestrator | 2026-04-01 03:14:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:38.377927 | orchestrator | 2026-04-01 03:14:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:38.379950 | orchestrator | 2026-04-01 03:14:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:38.380013 | orchestrator | 2026-04-01 03:14:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:41.425902 | orchestrator | 2026-04-01 03:14:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:14:41.426320 | orchestrator | 2026-04-01 03:14:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:14:41.426467 | orchestrator | 2026-04-01 03:14:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:14:44.476228 | orchestrator | 2026-04-01 03:14:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED
2026-04-01 03:14:44.477556 | orchestrator | 2026-04-01 03:14:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:14:44.477754 | orchestrator | 2026-04-01 03:14:44 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:14:47 through 03:19:58: tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 both remain in state STARTED, followed each cycle by "Wait 1 second(s) until the next check" ...]
2026-04-01 03:19:58.639325 | orchestrator | 2026-04-01 03:19:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:19:58.640674 | orchestrator | 2026-04-01 03:19:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:19:58.640759 | orchestrator | 2026-04-01 03:19:58 | INFO  | Wait 1 second(s) until the next check
2026-04-01 03:20:01.695437 | orchestrator | 2026-04-01 03:20:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state
STARTED 2026-04-01 03:20:01.696521 | orchestrator | 2026-04-01 03:20:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:01.696703 | orchestrator | 2026-04-01 03:20:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:04.746806 | orchestrator | 2026-04-01 03:20:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:04.747763 | orchestrator | 2026-04-01 03:20:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:04.747803 | orchestrator | 2026-04-01 03:20:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:07.802809 | orchestrator | 2026-04-01 03:20:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:07.804680 | orchestrator | 2026-04-01 03:20:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:07.804732 | orchestrator | 2026-04-01 03:20:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:10.858062 | orchestrator | 2026-04-01 03:20:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:10.860131 | orchestrator | 2026-04-01 03:20:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:10.860217 | orchestrator | 2026-04-01 03:20:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:13.918225 | orchestrator | 2026-04-01 03:20:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:13.918320 | orchestrator | 2026-04-01 03:20:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:13.918398 | orchestrator | 2026-04-01 03:20:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:16.971604 | orchestrator | 2026-04-01 03:20:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:16.973050 | orchestrator | 2026-04-01 03:20:16 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:16.973095 | orchestrator | 2026-04-01 03:20:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:20.023482 | orchestrator | 2026-04-01 03:20:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:20.026575 | orchestrator | 2026-04-01 03:20:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:20.026716 | orchestrator | 2026-04-01 03:20:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:23.075399 | orchestrator | 2026-04-01 03:20:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:23.078474 | orchestrator | 2026-04-01 03:20:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:23.078535 | orchestrator | 2026-04-01 03:20:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:26.132232 | orchestrator | 2026-04-01 03:20:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:26.133586 | orchestrator | 2026-04-01 03:20:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:26.133764 | orchestrator | 2026-04-01 03:20:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:29.189635 | orchestrator | 2026-04-01 03:20:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:29.192981 | orchestrator | 2026-04-01 03:20:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:29.193069 | orchestrator | 2026-04-01 03:20:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:32.242387 | orchestrator | 2026-04-01 03:20:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:32.244056 | orchestrator | 2026-04-01 03:20:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:20:32.244117 | orchestrator | 2026-04-01 03:20:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:35.293447 | orchestrator | 2026-04-01 03:20:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:35.295180 | orchestrator | 2026-04-01 03:20:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:35.295235 | orchestrator | 2026-04-01 03:20:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:38.345002 | orchestrator | 2026-04-01 03:20:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:38.345949 | orchestrator | 2026-04-01 03:20:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:38.345991 | orchestrator | 2026-04-01 03:20:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:41.406647 | orchestrator | 2026-04-01 03:20:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:41.407312 | orchestrator | 2026-04-01 03:20:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:41.407385 | orchestrator | 2026-04-01 03:20:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:44.463449 | orchestrator | 2026-04-01 03:20:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:44.464130 | orchestrator | 2026-04-01 03:20:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:44.464186 | orchestrator | 2026-04-01 03:20:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:47.513024 | orchestrator | 2026-04-01 03:20:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:47.513658 | orchestrator | 2026-04-01 03:20:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:47.513916 | orchestrator | 2026-04-01 03:20:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:20:50.560061 | orchestrator | 2026-04-01 03:20:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:50.561877 | orchestrator | 2026-04-01 03:20:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:50.561917 | orchestrator | 2026-04-01 03:20:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:53.607795 | orchestrator | 2026-04-01 03:20:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:53.609430 | orchestrator | 2026-04-01 03:20:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:53.609477 | orchestrator | 2026-04-01 03:20:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:56.659224 | orchestrator | 2026-04-01 03:20:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:56.660437 | orchestrator | 2026-04-01 03:20:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:56.660521 | orchestrator | 2026-04-01 03:20:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:20:59.711048 | orchestrator | 2026-04-01 03:20:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:20:59.712651 | orchestrator | 2026-04-01 03:20:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:20:59.712706 | orchestrator | 2026-04-01 03:20:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:02.763843 | orchestrator | 2026-04-01 03:21:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:02.765934 | orchestrator | 2026-04-01 03:21:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:02.765973 | orchestrator | 2026-04-01 03:21:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:05.807630 | orchestrator | 2026-04-01 
03:21:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:05.808167 | orchestrator | 2026-04-01 03:21:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:05.808211 | orchestrator | 2026-04-01 03:21:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:08.854915 | orchestrator | 2026-04-01 03:21:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:08.855083 | orchestrator | 2026-04-01 03:21:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:08.855107 | orchestrator | 2026-04-01 03:21:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:11.901397 | orchestrator | 2026-04-01 03:21:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:11.903994 | orchestrator | 2026-04-01 03:21:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:11.904090 | orchestrator | 2026-04-01 03:21:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:14.957011 | orchestrator | 2026-04-01 03:21:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:14.959284 | orchestrator | 2026-04-01 03:21:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:14.959360 | orchestrator | 2026-04-01 03:21:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:18.010580 | orchestrator | 2026-04-01 03:21:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:18.012552 | orchestrator | 2026-04-01 03:21:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:18.012605 | orchestrator | 2026-04-01 03:21:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:21.055683 | orchestrator | 2026-04-01 03:21:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:21:21.056039 | orchestrator | 2026-04-01 03:21:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:21.056164 | orchestrator | 2026-04-01 03:21:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:24.104968 | orchestrator | 2026-04-01 03:21:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:24.107285 | orchestrator | 2026-04-01 03:21:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:24.107359 | orchestrator | 2026-04-01 03:21:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:27.159235 | orchestrator | 2026-04-01 03:21:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:27.160350 | orchestrator | 2026-04-01 03:21:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:27.160387 | orchestrator | 2026-04-01 03:21:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:30.210860 | orchestrator | 2026-04-01 03:21:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:30.212964 | orchestrator | 2026-04-01 03:21:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:30.213063 | orchestrator | 2026-04-01 03:21:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:33.259279 | orchestrator | 2026-04-01 03:21:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:33.260289 | orchestrator | 2026-04-01 03:21:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:33.260473 | orchestrator | 2026-04-01 03:21:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:36.302825 | orchestrator | 2026-04-01 03:21:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:36.303826 | orchestrator | 2026-04-01 03:21:36 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:36.303870 | orchestrator | 2026-04-01 03:21:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:39.354667 | orchestrator | 2026-04-01 03:21:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:39.355959 | orchestrator | 2026-04-01 03:21:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:39.356016 | orchestrator | 2026-04-01 03:21:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:42.405325 | orchestrator | 2026-04-01 03:21:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:42.407753 | orchestrator | 2026-04-01 03:21:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:42.407850 | orchestrator | 2026-04-01 03:21:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:45.459034 | orchestrator | 2026-04-01 03:21:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:45.461541 | orchestrator | 2026-04-01 03:21:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:45.461703 | orchestrator | 2026-04-01 03:21:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:48.519512 | orchestrator | 2026-04-01 03:21:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:48.521621 | orchestrator | 2026-04-01 03:21:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:48.521710 | orchestrator | 2026-04-01 03:21:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:51.572499 | orchestrator | 2026-04-01 03:21:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:51.574066 | orchestrator | 2026-04-01 03:21:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:21:51.574123 | orchestrator | 2026-04-01 03:21:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:54.621014 | orchestrator | 2026-04-01 03:21:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:54.624265 | orchestrator | 2026-04-01 03:21:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:54.624360 | orchestrator | 2026-04-01 03:21:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:21:57.679549 | orchestrator | 2026-04-01 03:21:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:21:57.682976 | orchestrator | 2026-04-01 03:21:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:21:57.683037 | orchestrator | 2026-04-01 03:21:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:00.732229 | orchestrator | 2026-04-01 03:22:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:00.734503 | orchestrator | 2026-04-01 03:22:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:00.734536 | orchestrator | 2026-04-01 03:22:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:03.786410 | orchestrator | 2026-04-01 03:22:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:03.788309 | orchestrator | 2026-04-01 03:22:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:03.788386 | orchestrator | 2026-04-01 03:22:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:06.838072 | orchestrator | 2026-04-01 03:22:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:06.840395 | orchestrator | 2026-04-01 03:22:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:06.840427 | orchestrator | 2026-04-01 03:22:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:22:09.889577 | orchestrator | 2026-04-01 03:22:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:09.892079 | orchestrator | 2026-04-01 03:22:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:09.892248 | orchestrator | 2026-04-01 03:22:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:12.939336 | orchestrator | 2026-04-01 03:22:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:12.941168 | orchestrator | 2026-04-01 03:22:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:12.941280 | orchestrator | 2026-04-01 03:22:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:15.988358 | orchestrator | 2026-04-01 03:22:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:15.989593 | orchestrator | 2026-04-01 03:22:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:15.989780 | orchestrator | 2026-04-01 03:22:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:19.037209 | orchestrator | 2026-04-01 03:22:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:19.039137 | orchestrator | 2026-04-01 03:22:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:19.039276 | orchestrator | 2026-04-01 03:22:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:22.091751 | orchestrator | 2026-04-01 03:22:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:22.093453 | orchestrator | 2026-04-01 03:22:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:22.093514 | orchestrator | 2026-04-01 03:22:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:25.138956 | orchestrator | 2026-04-01 
03:22:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:25.140211 | orchestrator | 2026-04-01 03:22:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:25.140236 | orchestrator | 2026-04-01 03:22:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:28.188409 | orchestrator | 2026-04-01 03:22:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:28.192721 | orchestrator | 2026-04-01 03:22:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:28.192777 | orchestrator | 2026-04-01 03:22:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:31.235853 | orchestrator | 2026-04-01 03:22:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:31.237669 | orchestrator | 2026-04-01 03:22:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:31.237740 | orchestrator | 2026-04-01 03:22:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:34.283933 | orchestrator | 2026-04-01 03:22:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:34.285318 | orchestrator | 2026-04-01 03:22:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:34.285524 | orchestrator | 2026-04-01 03:22:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:37.336269 | orchestrator | 2026-04-01 03:22:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:37.338749 | orchestrator | 2026-04-01 03:22:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:37.338798 | orchestrator | 2026-04-01 03:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:40.389498 | orchestrator | 2026-04-01 03:22:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:22:40.392194 | orchestrator | 2026-04-01 03:22:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:40.392267 | orchestrator | 2026-04-01 03:22:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:43.440670 | orchestrator | 2026-04-01 03:22:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:43.442705 | orchestrator | 2026-04-01 03:22:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:43.442745 | orchestrator | 2026-04-01 03:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:46.490186 | orchestrator | 2026-04-01 03:22:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:46.492228 | orchestrator | 2026-04-01 03:22:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:46.492656 | orchestrator | 2026-04-01 03:22:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:49.540676 | orchestrator | 2026-04-01 03:22:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:49.542656 | orchestrator | 2026-04-01 03:22:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:49.542706 | orchestrator | 2026-04-01 03:22:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:52.589197 | orchestrator | 2026-04-01 03:22:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:52.590786 | orchestrator | 2026-04-01 03:22:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:52.590836 | orchestrator | 2026-04-01 03:22:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:55.636914 | orchestrator | 2026-04-01 03:22:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:55.638637 | orchestrator | 2026-04-01 03:22:55 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:55.638690 | orchestrator | 2026-04-01 03:22:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:22:58.697220 | orchestrator | 2026-04-01 03:22:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:22:58.697351 | orchestrator | 2026-04-01 03:22:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:22:58.697380 | orchestrator | 2026-04-01 03:22:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:01.746371 | orchestrator | 2026-04-01 03:23:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:01.747583 | orchestrator | 2026-04-01 03:23:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:01.747688 | orchestrator | 2026-04-01 03:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:04.798878 | orchestrator | 2026-04-01 03:23:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:04.800904 | orchestrator | 2026-04-01 03:23:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:04.800973 | orchestrator | 2026-04-01 03:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:07.853164 | orchestrator | 2026-04-01 03:23:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:07.855208 | orchestrator | 2026-04-01 03:23:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:07.855248 | orchestrator | 2026-04-01 03:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:10.898934 | orchestrator | 2026-04-01 03:23:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:10.901487 | orchestrator | 2026-04-01 03:23:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:23:10.901573 | orchestrator | 2026-04-01 03:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:13.950181 | orchestrator | 2026-04-01 03:23:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:13.952440 | orchestrator | 2026-04-01 03:23:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:13.952756 | orchestrator | 2026-04-01 03:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:17.001892 | orchestrator | 2026-04-01 03:23:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:17.004192 | orchestrator | 2026-04-01 03:23:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:17.004327 | orchestrator | 2026-04-01 03:23:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:20.058773 | orchestrator | 2026-04-01 03:23:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:20.058904 | orchestrator | 2026-04-01 03:23:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:20.058921 | orchestrator | 2026-04-01 03:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:23.106801 | orchestrator | 2026-04-01 03:23:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:23.108204 | orchestrator | 2026-04-01 03:23:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:23.108280 | orchestrator | 2026-04-01 03:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:23:26.155666 | orchestrator | 2026-04-01 03:23:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:23:26.158589 | orchestrator | 2026-04-01 03:23:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:23:26.158643 | orchestrator | 2026-04-01 03:23:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:23:29.204805 | orchestrator | 2026-04-01 03:23:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:23:29.205174 | orchestrator | 2026-04-01 03:23:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:23:29.205208 | orchestrator | 2026-04-01 03:23:29 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:23:32 to 03:30:43: both tasks (c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635) remain in state STARTED throughout; console output pauses between 03:25:43 and 03:27:43, then polling resumes unchanged ...]
2026-04-01 03:30:43.276464 | orchestrator | 2026-04-01 03:30:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:30:43.276571 | orchestrator | 2026-04-01 03:30:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:30:43.276583 | orchestrator | 2026-04-01 03:30:43 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 03:30:46.320897 | orchestrator | 2026-04-01 03:30:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:30:46.322843 | orchestrator | 2026-04-01 03:30:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:30:46.322901 | orchestrator | 2026-04-01 03:30:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:30:49.371205 | orchestrator | 2026-04-01 03:30:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:30:49.372602 | orchestrator | 2026-04-01 03:30:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:30:49.372654 | orchestrator | 2026-04-01 03:30:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:30:52.416077 | orchestrator | 2026-04-01 03:30:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:30:52.417501 | orchestrator | 2026-04-01 03:30:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:30:52.417537 | orchestrator | 2026-04-01 03:30:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:30:55.470676 | orchestrator | 2026-04-01 03:30:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:30:55.470900 | orchestrator | 2026-04-01 03:30:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:30:55.471375 | orchestrator | 2026-04-01 03:30:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:30:58.516294 | orchestrator | 2026-04-01 03:30:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:30:58.517677 | orchestrator | 2026-04-01 03:30:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:30:58.517794 | orchestrator | 2026-04-01 03:30:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:01.561223 | orchestrator | 2026-04-01 
03:31:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:01.563212 | orchestrator | 2026-04-01 03:31:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:01.563262 | orchestrator | 2026-04-01 03:31:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:04.614851 | orchestrator | 2026-04-01 03:31:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:04.614949 | orchestrator | 2026-04-01 03:31:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:04.614959 | orchestrator | 2026-04-01 03:31:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:07.657507 | orchestrator | 2026-04-01 03:31:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:07.659883 | orchestrator | 2026-04-01 03:31:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:07.659959 | orchestrator | 2026-04-01 03:31:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:10.704368 | orchestrator | 2026-04-01 03:31:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:10.707121 | orchestrator | 2026-04-01 03:31:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:10.707177 | orchestrator | 2026-04-01 03:31:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:13.758071 | orchestrator | 2026-04-01 03:31:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:13.759110 | orchestrator | 2026-04-01 03:31:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:13.759222 | orchestrator | 2026-04-01 03:31:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:16.800416 | orchestrator | 2026-04-01 03:31:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:31:16.802003 | orchestrator | 2026-04-01 03:31:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:16.802109 | orchestrator | 2026-04-01 03:31:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:19.850933 | orchestrator | 2026-04-01 03:31:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:19.851525 | orchestrator | 2026-04-01 03:31:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:19.851963 | orchestrator | 2026-04-01 03:31:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:22.891820 | orchestrator | 2026-04-01 03:31:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:22.893173 | orchestrator | 2026-04-01 03:31:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:22.893356 | orchestrator | 2026-04-01 03:31:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:25.938316 | orchestrator | 2026-04-01 03:31:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:25.940817 | orchestrator | 2026-04-01 03:31:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:25.940892 | orchestrator | 2026-04-01 03:31:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:28.986529 | orchestrator | 2026-04-01 03:31:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:28.988420 | orchestrator | 2026-04-01 03:31:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:28.988473 | orchestrator | 2026-04-01 03:31:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:32.036457 | orchestrator | 2026-04-01 03:31:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:32.038116 | orchestrator | 2026-04-01 03:31:32 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:32.038185 | orchestrator | 2026-04-01 03:31:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:35.092519 | orchestrator | 2026-04-01 03:31:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:35.094305 | orchestrator | 2026-04-01 03:31:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:35.094362 | orchestrator | 2026-04-01 03:31:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:38.140108 | orchestrator | 2026-04-01 03:31:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:38.141593 | orchestrator | 2026-04-01 03:31:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:38.141739 | orchestrator | 2026-04-01 03:31:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:41.183787 | orchestrator | 2026-04-01 03:31:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:41.184350 | orchestrator | 2026-04-01 03:31:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:41.184380 | orchestrator | 2026-04-01 03:31:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:44.228671 | orchestrator | 2026-04-01 03:31:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:44.230129 | orchestrator | 2026-04-01 03:31:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:44.230273 | orchestrator | 2026-04-01 03:31:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:47.286151 | orchestrator | 2026-04-01 03:31:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:47.287408 | orchestrator | 2026-04-01 03:31:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:31:47.288310 | orchestrator | 2026-04-01 03:31:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:50.334069 | orchestrator | 2026-04-01 03:31:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:50.335052 | orchestrator | 2026-04-01 03:31:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:50.335077 | orchestrator | 2026-04-01 03:31:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:53.375800 | orchestrator | 2026-04-01 03:31:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:53.377831 | orchestrator | 2026-04-01 03:31:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:53.377890 | orchestrator | 2026-04-01 03:31:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:56.418579 | orchestrator | 2026-04-01 03:31:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:56.420560 | orchestrator | 2026-04-01 03:31:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:56.420710 | orchestrator | 2026-04-01 03:31:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:31:59.466224 | orchestrator | 2026-04-01 03:31:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:31:59.467881 | orchestrator | 2026-04-01 03:31:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:31:59.467938 | orchestrator | 2026-04-01 03:31:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:02.511934 | orchestrator | 2026-04-01 03:32:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:02.513285 | orchestrator | 2026-04-01 03:32:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:02.513324 | orchestrator | 2026-04-01 03:32:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:32:05.561234 | orchestrator | 2026-04-01 03:32:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:05.563102 | orchestrator | 2026-04-01 03:32:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:05.563147 | orchestrator | 2026-04-01 03:32:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:08.604369 | orchestrator | 2026-04-01 03:32:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:08.605233 | orchestrator | 2026-04-01 03:32:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:08.605288 | orchestrator | 2026-04-01 03:32:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:11.651825 | orchestrator | 2026-04-01 03:32:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:11.654300 | orchestrator | 2026-04-01 03:32:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:11.654336 | orchestrator | 2026-04-01 03:32:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:14.703788 | orchestrator | 2026-04-01 03:32:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:14.705106 | orchestrator | 2026-04-01 03:32:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:14.705164 | orchestrator | 2026-04-01 03:32:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:17.751905 | orchestrator | 2026-04-01 03:32:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:17.754830 | orchestrator | 2026-04-01 03:32:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:17.754957 | orchestrator | 2026-04-01 03:32:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:20.801507 | orchestrator | 2026-04-01 
03:32:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:20.803561 | orchestrator | 2026-04-01 03:32:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:20.803601 | orchestrator | 2026-04-01 03:32:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:23.845521 | orchestrator | 2026-04-01 03:32:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:23.847304 | orchestrator | 2026-04-01 03:32:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:23.847371 | orchestrator | 2026-04-01 03:32:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:26.889239 | orchestrator | 2026-04-01 03:32:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:26.891224 | orchestrator | 2026-04-01 03:32:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:26.891283 | orchestrator | 2026-04-01 03:32:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:29.933550 | orchestrator | 2026-04-01 03:32:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:29.935421 | orchestrator | 2026-04-01 03:32:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:29.935472 | orchestrator | 2026-04-01 03:32:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:32.980941 | orchestrator | 2026-04-01 03:32:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:32.984403 | orchestrator | 2026-04-01 03:32:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:32.984481 | orchestrator | 2026-04-01 03:32:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:36.031548 | orchestrator | 2026-04-01 03:32:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:32:36.032156 | orchestrator | 2026-04-01 03:32:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:36.032264 | orchestrator | 2026-04-01 03:32:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:39.073851 | orchestrator | 2026-04-01 03:32:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:39.075388 | orchestrator | 2026-04-01 03:32:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:39.075444 | orchestrator | 2026-04-01 03:32:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:42.115469 | orchestrator | 2026-04-01 03:32:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:42.117914 | orchestrator | 2026-04-01 03:32:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:42.117986 | orchestrator | 2026-04-01 03:32:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:45.160402 | orchestrator | 2026-04-01 03:32:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:45.161819 | orchestrator | 2026-04-01 03:32:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:45.161898 | orchestrator | 2026-04-01 03:32:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:48.201052 | orchestrator | 2026-04-01 03:32:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:48.202739 | orchestrator | 2026-04-01 03:32:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:48.202852 | orchestrator | 2026-04-01 03:32:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:51.247059 | orchestrator | 2026-04-01 03:32:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:51.249277 | orchestrator | 2026-04-01 03:32:51 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:51.249370 | orchestrator | 2026-04-01 03:32:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:54.291551 | orchestrator | 2026-04-01 03:32:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:54.293719 | orchestrator | 2026-04-01 03:32:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:54.293789 | orchestrator | 2026-04-01 03:32:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:32:57.342838 | orchestrator | 2026-04-01 03:32:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:32:57.344512 | orchestrator | 2026-04-01 03:32:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:32:57.344716 | orchestrator | 2026-04-01 03:32:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:00.388811 | orchestrator | 2026-04-01 03:33:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:00.392312 | orchestrator | 2026-04-01 03:33:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:00.392376 | orchestrator | 2026-04-01 03:33:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:03.438389 | orchestrator | 2026-04-01 03:33:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:03.441946 | orchestrator | 2026-04-01 03:33:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:03.442162 | orchestrator | 2026-04-01 03:33:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:06.490686 | orchestrator | 2026-04-01 03:33:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:06.490920 | orchestrator | 2026-04-01 03:33:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:33:06.490942 | orchestrator | 2026-04-01 03:33:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:09.531472 | orchestrator | 2026-04-01 03:33:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:09.533526 | orchestrator | 2026-04-01 03:33:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:09.533983 | orchestrator | 2026-04-01 03:33:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:12.582356 | orchestrator | 2026-04-01 03:33:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:12.584322 | orchestrator | 2026-04-01 03:33:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:12.584357 | orchestrator | 2026-04-01 03:33:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:15.632721 | orchestrator | 2026-04-01 03:33:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:15.633543 | orchestrator | 2026-04-01 03:33:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:15.633699 | orchestrator | 2026-04-01 03:33:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:18.673982 | orchestrator | 2026-04-01 03:33:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:18.675898 | orchestrator | 2026-04-01 03:33:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:18.675948 | orchestrator | 2026-04-01 03:33:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:21.726679 | orchestrator | 2026-04-01 03:33:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:21.727699 | orchestrator | 2026-04-01 03:33:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:21.727804 | orchestrator | 2026-04-01 03:33:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:33:24.771007 | orchestrator | 2026-04-01 03:33:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:24.771100 | orchestrator | 2026-04-01 03:33:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:24.771113 | orchestrator | 2026-04-01 03:33:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:27.810531 | orchestrator | 2026-04-01 03:33:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:27.813436 | orchestrator | 2026-04-01 03:33:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:27.813485 | orchestrator | 2026-04-01 03:33:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:30.858411 | orchestrator | 2026-04-01 03:33:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:30.859403 | orchestrator | 2026-04-01 03:33:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:30.859465 | orchestrator | 2026-04-01 03:33:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:33.904057 | orchestrator | 2026-04-01 03:33:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:33.905112 | orchestrator | 2026-04-01 03:33:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:33.905340 | orchestrator | 2026-04-01 03:33:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:36.952841 | orchestrator | 2026-04-01 03:33:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:36.954570 | orchestrator | 2026-04-01 03:33:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:36.954672 | orchestrator | 2026-04-01 03:33:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:39.996062 | orchestrator | 2026-04-01 
03:33:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:39.998232 | orchestrator | 2026-04-01 03:33:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:39.998327 | orchestrator | 2026-04-01 03:33:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:43.042066 | orchestrator | 2026-04-01 03:33:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:43.043925 | orchestrator | 2026-04-01 03:33:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:43.044038 | orchestrator | 2026-04-01 03:33:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:46.097142 | orchestrator | 2026-04-01 03:33:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:46.099669 | orchestrator | 2026-04-01 03:33:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:46.100109 | orchestrator | 2026-04-01 03:33:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:49.151327 | orchestrator | 2026-04-01 03:33:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:49.153710 | orchestrator | 2026-04-01 03:33:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:49.153751 | orchestrator | 2026-04-01 03:33:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:52.204485 | orchestrator | 2026-04-01 03:33:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:52.207354 | orchestrator | 2026-04-01 03:33:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:52.207449 | orchestrator | 2026-04-01 03:33:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:55.249783 | orchestrator | 2026-04-01 03:33:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:33:55.251096 | orchestrator | 2026-04-01 03:33:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:55.251206 | orchestrator | 2026-04-01 03:33:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:33:58.299227 | orchestrator | 2026-04-01 03:33:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:33:58.300841 | orchestrator | 2026-04-01 03:33:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:33:58.300923 | orchestrator | 2026-04-01 03:33:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:01.343218 | orchestrator | 2026-04-01 03:34:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:01.344781 | orchestrator | 2026-04-01 03:34:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:01.344884 | orchestrator | 2026-04-01 03:34:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:04.383394 | orchestrator | 2026-04-01 03:34:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:04.384789 | orchestrator | 2026-04-01 03:34:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:04.384833 | orchestrator | 2026-04-01 03:34:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:07.435903 | orchestrator | 2026-04-01 03:34:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:07.437516 | orchestrator | 2026-04-01 03:34:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:07.437607 | orchestrator | 2026-04-01 03:34:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:10.487349 | orchestrator | 2026-04-01 03:34:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:10.490232 | orchestrator | 2026-04-01 03:34:10 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:10.490313 | orchestrator | 2026-04-01 03:34:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:13.538262 | orchestrator | 2026-04-01 03:34:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:13.539648 | orchestrator | 2026-04-01 03:34:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:13.539702 | orchestrator | 2026-04-01 03:34:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:16.581366 | orchestrator | 2026-04-01 03:34:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:16.584787 | orchestrator | 2026-04-01 03:34:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:16.584840 | orchestrator | 2026-04-01 03:34:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:19.630189 | orchestrator | 2026-04-01 03:34:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:19.632021 | orchestrator | 2026-04-01 03:34:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:19.632155 | orchestrator | 2026-04-01 03:34:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:22.683504 | orchestrator | 2026-04-01 03:34:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:22.686428 | orchestrator | 2026-04-01 03:34:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:34:22.686615 | orchestrator | 2026-04-01 03:34:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:34:25.731676 | orchestrator | 2026-04-01 03:34:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:34:25.733503 | orchestrator | 2026-04-01 03:34:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:34:25.733640 | orchestrator | 2026-04-01 03:34:25 | INFO  | Wait 1 second(s) until the next check
2026-04-01 03:34:28.776126 | orchestrator | 2026-04-01 03:34:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:34:28.778692 | orchestrator | 2026-04-01 03:34:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:34:28.778735 | orchestrator | 2026-04-01 03:34:28 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:34:31 to 03:39:55; both tasks remained in state STARTED throughout ...]
2026-04-01 03:39:58.128888 | orchestrator | 2026-04-01 03:39:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 03:39:58.130662 | orchestrator | 2026-04-01 03:39:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:39:58.130696 | orchestrator | 2026-04-01 03:39:58 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 03:40:01.185205 | orchestrator | 2026-04-01 03:40:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:01.187148 | orchestrator | 2026-04-01 03:40:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:01.187222 | orchestrator | 2026-04-01 03:40:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:04.246809 | orchestrator | 2026-04-01 03:40:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:04.250203 | orchestrator | 2026-04-01 03:40:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:04.250273 | orchestrator | 2026-04-01 03:40:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:07.297826 | orchestrator | 2026-04-01 03:40:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:07.298882 | orchestrator | 2026-04-01 03:40:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:07.299192 | orchestrator | 2026-04-01 03:40:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:10.350823 | orchestrator | 2026-04-01 03:40:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:10.352218 | orchestrator | 2026-04-01 03:40:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:10.352257 | orchestrator | 2026-04-01 03:40:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:13.398551 | orchestrator | 2026-04-01 03:40:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:13.400401 | orchestrator | 2026-04-01 03:40:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:13.400438 | orchestrator | 2026-04-01 03:40:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:16.452398 | orchestrator | 2026-04-01 
03:40:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:16.453454 | orchestrator | 2026-04-01 03:40:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:16.453503 | orchestrator | 2026-04-01 03:40:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:19.503745 | orchestrator | 2026-04-01 03:40:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:19.506402 | orchestrator | 2026-04-01 03:40:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:19.506475 | orchestrator | 2026-04-01 03:40:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:22.555779 | orchestrator | 2026-04-01 03:40:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:22.557628 | orchestrator | 2026-04-01 03:40:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:22.557688 | orchestrator | 2026-04-01 03:40:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:25.607135 | orchestrator | 2026-04-01 03:40:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:25.608612 | orchestrator | 2026-04-01 03:40:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:25.608658 | orchestrator | 2026-04-01 03:40:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:28.655373 | orchestrator | 2026-04-01 03:40:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:28.657402 | orchestrator | 2026-04-01 03:40:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:28.657489 | orchestrator | 2026-04-01 03:40:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:31.712925 | orchestrator | 2026-04-01 03:40:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:40:31.713001 | orchestrator | 2026-04-01 03:40:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:31.713009 | orchestrator | 2026-04-01 03:40:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:34.765870 | orchestrator | 2026-04-01 03:40:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:34.768793 | orchestrator | 2026-04-01 03:40:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:34.768841 | orchestrator | 2026-04-01 03:40:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:37.811546 | orchestrator | 2026-04-01 03:40:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:37.814386 | orchestrator | 2026-04-01 03:40:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:37.814468 | orchestrator | 2026-04-01 03:40:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:40.861694 | orchestrator | 2026-04-01 03:40:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:40.863996 | orchestrator | 2026-04-01 03:40:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:40.864072 | orchestrator | 2026-04-01 03:40:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:43.915243 | orchestrator | 2026-04-01 03:40:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:43.917962 | orchestrator | 2026-04-01 03:40:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:43.918071 | orchestrator | 2026-04-01 03:40:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:46.972867 | orchestrator | 2026-04-01 03:40:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:46.974677 | orchestrator | 2026-04-01 03:40:46 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:46.974736 | orchestrator | 2026-04-01 03:40:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:50.018171 | orchestrator | 2026-04-01 03:40:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:50.019328 | orchestrator | 2026-04-01 03:40:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:50.019378 | orchestrator | 2026-04-01 03:40:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:53.062246 | orchestrator | 2026-04-01 03:40:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:53.065786 | orchestrator | 2026-04-01 03:40:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:53.065888 | orchestrator | 2026-04-01 03:40:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:56.113373 | orchestrator | 2026-04-01 03:40:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:56.115504 | orchestrator | 2026-04-01 03:40:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:56.115573 | orchestrator | 2026-04-01 03:40:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:40:59.155545 | orchestrator | 2026-04-01 03:40:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:40:59.157422 | orchestrator | 2026-04-01 03:40:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:40:59.158075 | orchestrator | 2026-04-01 03:40:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:02.209602 | orchestrator | 2026-04-01 03:41:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:02.211123 | orchestrator | 2026-04-01 03:41:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:41:02.211295 | orchestrator | 2026-04-01 03:41:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:05.259381 | orchestrator | 2026-04-01 03:41:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:05.260812 | orchestrator | 2026-04-01 03:41:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:05.260879 | orchestrator | 2026-04-01 03:41:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:08.310466 | orchestrator | 2026-04-01 03:41:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:08.311414 | orchestrator | 2026-04-01 03:41:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:08.311959 | orchestrator | 2026-04-01 03:41:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:11.366534 | orchestrator | 2026-04-01 03:41:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:11.369865 | orchestrator | 2026-04-01 03:41:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:11.369931 | orchestrator | 2026-04-01 03:41:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:14.427547 | orchestrator | 2026-04-01 03:41:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:14.429073 | orchestrator | 2026-04-01 03:41:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:14.429164 | orchestrator | 2026-04-01 03:41:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:17.479684 | orchestrator | 2026-04-01 03:41:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:17.481391 | orchestrator | 2026-04-01 03:41:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:17.481431 | orchestrator | 2026-04-01 03:41:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:41:20.533114 | orchestrator | 2026-04-01 03:41:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:20.534405 | orchestrator | 2026-04-01 03:41:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:20.534441 | orchestrator | 2026-04-01 03:41:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:23.576108 | orchestrator | 2026-04-01 03:41:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:23.577702 | orchestrator | 2026-04-01 03:41:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:23.577750 | orchestrator | 2026-04-01 03:41:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:26.624838 | orchestrator | 2026-04-01 03:41:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:26.626682 | orchestrator | 2026-04-01 03:41:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:26.626766 | orchestrator | 2026-04-01 03:41:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:29.675946 | orchestrator | 2026-04-01 03:41:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:29.678153 | orchestrator | 2026-04-01 03:41:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:29.678214 | orchestrator | 2026-04-01 03:41:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:32.725461 | orchestrator | 2026-04-01 03:41:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:32.727882 | orchestrator | 2026-04-01 03:41:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:32.727958 | orchestrator | 2026-04-01 03:41:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:35.769889 | orchestrator | 2026-04-01 
03:41:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:35.771831 | orchestrator | 2026-04-01 03:41:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:35.771887 | orchestrator | 2026-04-01 03:41:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:38.818081 | orchestrator | 2026-04-01 03:41:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:38.819996 | orchestrator | 2026-04-01 03:41:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:38.820060 | orchestrator | 2026-04-01 03:41:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:41.868621 | orchestrator | 2026-04-01 03:41:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:41.869610 | orchestrator | 2026-04-01 03:41:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:41.869653 | orchestrator | 2026-04-01 03:41:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:44.915504 | orchestrator | 2026-04-01 03:41:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:44.917297 | orchestrator | 2026-04-01 03:41:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:44.917370 | orchestrator | 2026-04-01 03:41:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:47.964963 | orchestrator | 2026-04-01 03:41:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:47.965720 | orchestrator | 2026-04-01 03:41:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:47.965837 | orchestrator | 2026-04-01 03:41:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:51.018708 | orchestrator | 2026-04-01 03:41:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:41:51.020321 | orchestrator | 2026-04-01 03:41:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:51.020359 | orchestrator | 2026-04-01 03:41:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:54.069574 | orchestrator | 2026-04-01 03:41:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:54.070840 | orchestrator | 2026-04-01 03:41:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:54.070888 | orchestrator | 2026-04-01 03:41:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:41:57.121055 | orchestrator | 2026-04-01 03:41:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:41:57.124039 | orchestrator | 2026-04-01 03:41:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:41:57.124398 | orchestrator | 2026-04-01 03:41:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:00.176119 | orchestrator | 2026-04-01 03:42:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:00.177915 | orchestrator | 2026-04-01 03:42:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:00.177959 | orchestrator | 2026-04-01 03:42:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:03.221845 | orchestrator | 2026-04-01 03:42:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:03.222807 | orchestrator | 2026-04-01 03:42:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:03.223147 | orchestrator | 2026-04-01 03:42:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:06.269127 | orchestrator | 2026-04-01 03:42:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:06.270508 | orchestrator | 2026-04-01 03:42:06 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:06.270642 | orchestrator | 2026-04-01 03:42:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:09.311128 | orchestrator | 2026-04-01 03:42:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:09.312942 | orchestrator | 2026-04-01 03:42:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:09.313447 | orchestrator | 2026-04-01 03:42:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:12.358356 | orchestrator | 2026-04-01 03:42:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:12.360487 | orchestrator | 2026-04-01 03:42:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:12.360551 | orchestrator | 2026-04-01 03:42:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:15.407085 | orchestrator | 2026-04-01 03:42:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:15.408738 | orchestrator | 2026-04-01 03:42:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:15.408829 | orchestrator | 2026-04-01 03:42:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:18.453905 | orchestrator | 2026-04-01 03:42:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:18.455193 | orchestrator | 2026-04-01 03:42:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:18.455265 | orchestrator | 2026-04-01 03:42:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:21.513455 | orchestrator | 2026-04-01 03:42:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:21.516313 | orchestrator | 2026-04-01 03:42:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:42:21.516539 | orchestrator | 2026-04-01 03:42:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:24.573993 | orchestrator | 2026-04-01 03:42:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:24.576259 | orchestrator | 2026-04-01 03:42:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:24.576492 | orchestrator | 2026-04-01 03:42:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:27.631303 | orchestrator | 2026-04-01 03:42:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:27.633864 | orchestrator | 2026-04-01 03:42:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:27.633927 | orchestrator | 2026-04-01 03:42:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:30.683871 | orchestrator | 2026-04-01 03:42:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:30.685463 | orchestrator | 2026-04-01 03:42:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:30.685515 | orchestrator | 2026-04-01 03:42:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:33.732766 | orchestrator | 2026-04-01 03:42:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:33.733952 | orchestrator | 2026-04-01 03:42:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:33.734010 | orchestrator | 2026-04-01 03:42:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:36.784603 | orchestrator | 2026-04-01 03:42:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:36.788691 | orchestrator | 2026-04-01 03:42:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:36.788765 | orchestrator | 2026-04-01 03:42:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:42:39.840089 | orchestrator | 2026-04-01 03:42:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:39.843009 | orchestrator | 2026-04-01 03:42:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:39.843171 | orchestrator | 2026-04-01 03:42:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:42.893784 | orchestrator | 2026-04-01 03:42:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:42.897006 | orchestrator | 2026-04-01 03:42:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:42.897087 | orchestrator | 2026-04-01 03:42:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:45.952738 | orchestrator | 2026-04-01 03:42:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:45.955050 | orchestrator | 2026-04-01 03:42:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:45.955128 | orchestrator | 2026-04-01 03:42:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:49.007835 | orchestrator | 2026-04-01 03:42:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:49.009625 | orchestrator | 2026-04-01 03:42:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:49.009766 | orchestrator | 2026-04-01 03:42:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:52.065169 | orchestrator | 2026-04-01 03:42:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:52.067975 | orchestrator | 2026-04-01 03:42:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:52.068048 | orchestrator | 2026-04-01 03:42:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:55.118346 | orchestrator | 2026-04-01 
03:42:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:55.120231 | orchestrator | 2026-04-01 03:42:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:55.120313 | orchestrator | 2026-04-01 03:42:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:42:58.172155 | orchestrator | 2026-04-01 03:42:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:42:58.176828 | orchestrator | 2026-04-01 03:42:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:42:58.176914 | orchestrator | 2026-04-01 03:42:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:01.223421 | orchestrator | 2026-04-01 03:43:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:01.224777 | orchestrator | 2026-04-01 03:43:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:01.224804 | orchestrator | 2026-04-01 03:43:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:04.271674 | orchestrator | 2026-04-01 03:43:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:04.272899 | orchestrator | 2026-04-01 03:43:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:04.274219 | orchestrator | 2026-04-01 03:43:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:07.319316 | orchestrator | 2026-04-01 03:43:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:07.320896 | orchestrator | 2026-04-01 03:43:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:07.320963 | orchestrator | 2026-04-01 03:43:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:10.365048 | orchestrator | 2026-04-01 03:43:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:43:10.365881 | orchestrator | 2026-04-01 03:43:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:10.365932 | orchestrator | 2026-04-01 03:43:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:13.416703 | orchestrator | 2026-04-01 03:43:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:13.419517 | orchestrator | 2026-04-01 03:43:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:13.419599 | orchestrator | 2026-04-01 03:43:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:16.471362 | orchestrator | 2026-04-01 03:43:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:16.474397 | orchestrator | 2026-04-01 03:43:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:16.474518 | orchestrator | 2026-04-01 03:43:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:19.516964 | orchestrator | 2026-04-01 03:43:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:19.518973 | orchestrator | 2026-04-01 03:43:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:19.519033 | orchestrator | 2026-04-01 03:43:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:22.568548 | orchestrator | 2026-04-01 03:43:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:22.569955 | orchestrator | 2026-04-01 03:43:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:22.570220 | orchestrator | 2026-04-01 03:43:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:25.616666 | orchestrator | 2026-04-01 03:43:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:25.619015 | orchestrator | 2026-04-01 03:43:25 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:25.619077 | orchestrator | 2026-04-01 03:43:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:28.664810 | orchestrator | 2026-04-01 03:43:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:28.665294 | orchestrator | 2026-04-01 03:43:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:28.665744 | orchestrator | 2026-04-01 03:43:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:31.711533 | orchestrator | 2026-04-01 03:43:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:31.713544 | orchestrator | 2026-04-01 03:43:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:31.713606 | orchestrator | 2026-04-01 03:43:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:34.760571 | orchestrator | 2026-04-01 03:43:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:34.763910 | orchestrator | 2026-04-01 03:43:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:34.763970 | orchestrator | 2026-04-01 03:43:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:37.803396 | orchestrator | 2026-04-01 03:43:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:37.804897 | orchestrator | 2026-04-01 03:43:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:37.804915 | orchestrator | 2026-04-01 03:43:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:40.848017 | orchestrator | 2026-04-01 03:43:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:40.850336 | orchestrator | 2026-04-01 03:43:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:43:40.850374 | orchestrator | 2026-04-01 03:43:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:43:43.901092 | orchestrator | 2026-04-01 03:43:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:43:43.903581 | orchestrator | 2026-04-01 03:43:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:43:43.903705 | orchestrator | 2026-04-01 03:43:43 | INFO  | Wait 1 second(s) until the next check
[... identical status/wait cycle repeated every ~3 seconds from 03:43:46 through 03:48:39; both tasks remain in state STARTED throughout ...]
2026-04-01 03:48:42.752646 | orchestrator | 2026-04-01 03:48:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:42.754424 | orchestrator | 2026-04-01 03:48:42 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:48:42.754478 | orchestrator | 2026-04-01 03:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:48:45.804411 | orchestrator | 2026-04-01 03:48:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:45.805818 | orchestrator | 2026-04-01 03:48:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:48:45.806284 | orchestrator | 2026-04-01 03:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:48:48.860715 | orchestrator | 2026-04-01 03:48:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:48.863376 | orchestrator | 2026-04-01 03:48:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:48:48.863475 | orchestrator | 2026-04-01 03:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:48:51.908832 | orchestrator | 2026-04-01 03:48:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:51.910849 | orchestrator | 2026-04-01 03:48:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:48:51.910914 | orchestrator | 2026-04-01 03:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:48:54.960244 | orchestrator | 2026-04-01 03:48:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:54.961011 | orchestrator | 2026-04-01 03:48:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:48:54.961400 | orchestrator | 2026-04-01 03:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:48:58.010921 | orchestrator | 2026-04-01 03:48:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:48:58.012420 | orchestrator | 2026-04-01 03:48:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:48:58.012476 | orchestrator | 2026-04-01 03:48:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:01.051419 | orchestrator | 2026-04-01 03:49:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:01.052466 | orchestrator | 2026-04-01 03:49:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:01.052553 | orchestrator | 2026-04-01 03:49:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:04.104425 | orchestrator | 2026-04-01 03:49:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:04.105613 | orchestrator | 2026-04-01 03:49:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:04.105630 | orchestrator | 2026-04-01 03:49:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:07.149345 | orchestrator | 2026-04-01 03:49:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:07.150419 | orchestrator | 2026-04-01 03:49:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:07.150672 | orchestrator | 2026-04-01 03:49:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:10.199660 | orchestrator | 2026-04-01 03:49:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:10.201320 | orchestrator | 2026-04-01 03:49:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:10.201369 | orchestrator | 2026-04-01 03:49:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:13.248885 | orchestrator | 2026-04-01 03:49:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:13.251644 | orchestrator | 2026-04-01 03:49:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:13.251705 | orchestrator | 2026-04-01 03:49:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:49:16.296404 | orchestrator | 2026-04-01 03:49:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:16.298653 | orchestrator | 2026-04-01 03:49:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:16.298709 | orchestrator | 2026-04-01 03:49:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:19.344897 | orchestrator | 2026-04-01 03:49:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:19.346578 | orchestrator | 2026-04-01 03:49:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:19.346646 | orchestrator | 2026-04-01 03:49:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:22.392787 | orchestrator | 2026-04-01 03:49:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:22.395392 | orchestrator | 2026-04-01 03:49:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:22.395456 | orchestrator | 2026-04-01 03:49:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:25.445097 | orchestrator | 2026-04-01 03:49:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:25.447116 | orchestrator | 2026-04-01 03:49:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:25.447167 | orchestrator | 2026-04-01 03:49:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:28.505002 | orchestrator | 2026-04-01 03:49:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:28.507202 | orchestrator | 2026-04-01 03:49:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:28.507257 | orchestrator | 2026-04-01 03:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:31.552882 | orchestrator | 2026-04-01 
03:49:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:31.554327 | orchestrator | 2026-04-01 03:49:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:31.554369 | orchestrator | 2026-04-01 03:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:34.611032 | orchestrator | 2026-04-01 03:49:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:34.613253 | orchestrator | 2026-04-01 03:49:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:34.613322 | orchestrator | 2026-04-01 03:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:37.662652 | orchestrator | 2026-04-01 03:49:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:37.666996 | orchestrator | 2026-04-01 03:49:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:37.667480 | orchestrator | 2026-04-01 03:49:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:40.715962 | orchestrator | 2026-04-01 03:49:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:40.718859 | orchestrator | 2026-04-01 03:49:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:40.718954 | orchestrator | 2026-04-01 03:49:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:43.764725 | orchestrator | 2026-04-01 03:49:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:43.765440 | orchestrator | 2026-04-01 03:49:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:43.765487 | orchestrator | 2026-04-01 03:49:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:46.823436 | orchestrator | 2026-04-01 03:49:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:49:46.825435 | orchestrator | 2026-04-01 03:49:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:46.825517 | orchestrator | 2026-04-01 03:49:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:49.882085 | orchestrator | 2026-04-01 03:49:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:49.882914 | orchestrator | 2026-04-01 03:49:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:49.882961 | orchestrator | 2026-04-01 03:49:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:52.932190 | orchestrator | 2026-04-01 03:49:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:52.934164 | orchestrator | 2026-04-01 03:49:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:52.934303 | orchestrator | 2026-04-01 03:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:55.982463 | orchestrator | 2026-04-01 03:49:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:55.985119 | orchestrator | 2026-04-01 03:49:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:55.985205 | orchestrator | 2026-04-01 03:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:49:59.030971 | orchestrator | 2026-04-01 03:49:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:49:59.031803 | orchestrator | 2026-04-01 03:49:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:49:59.031843 | orchestrator | 2026-04-01 03:49:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:02.078226 | orchestrator | 2026-04-01 03:50:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:02.079219 | orchestrator | 2026-04-01 03:50:02 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:02.079296 | orchestrator | 2026-04-01 03:50:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:05.135017 | orchestrator | 2026-04-01 03:50:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:05.138863 | orchestrator | 2026-04-01 03:50:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:05.138943 | orchestrator | 2026-04-01 03:50:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:08.182797 | orchestrator | 2026-04-01 03:50:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:08.184151 | orchestrator | 2026-04-01 03:50:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:08.184200 | orchestrator | 2026-04-01 03:50:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:11.240957 | orchestrator | 2026-04-01 03:50:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:11.242918 | orchestrator | 2026-04-01 03:50:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:11.242979 | orchestrator | 2026-04-01 03:50:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:14.292499 | orchestrator | 2026-04-01 03:50:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:14.294629 | orchestrator | 2026-04-01 03:50:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:14.294780 | orchestrator | 2026-04-01 03:50:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:17.343472 | orchestrator | 2026-04-01 03:50:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:17.345109 | orchestrator | 2026-04-01 03:50:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:50:17.345187 | orchestrator | 2026-04-01 03:50:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:20.395111 | orchestrator | 2026-04-01 03:50:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:20.396622 | orchestrator | 2026-04-01 03:50:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:20.396694 | orchestrator | 2026-04-01 03:50:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:23.444072 | orchestrator | 2026-04-01 03:50:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:23.446789 | orchestrator | 2026-04-01 03:50:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:23.446877 | orchestrator | 2026-04-01 03:50:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:26.490866 | orchestrator | 2026-04-01 03:50:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:26.493288 | orchestrator | 2026-04-01 03:50:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:26.493342 | orchestrator | 2026-04-01 03:50:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:29.542567 | orchestrator | 2026-04-01 03:50:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:29.544793 | orchestrator | 2026-04-01 03:50:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:29.544914 | orchestrator | 2026-04-01 03:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:32.592347 | orchestrator | 2026-04-01 03:50:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:32.594506 | orchestrator | 2026-04-01 03:50:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:32.594682 | orchestrator | 2026-04-01 03:50:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:50:35.635804 | orchestrator | 2026-04-01 03:50:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:35.636723 | orchestrator | 2026-04-01 03:50:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:35.636753 | orchestrator | 2026-04-01 03:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:38.690423 | orchestrator | 2026-04-01 03:50:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:38.691503 | orchestrator | 2026-04-01 03:50:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:38.691544 | orchestrator | 2026-04-01 03:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:41.738314 | orchestrator | 2026-04-01 03:50:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:41.740363 | orchestrator | 2026-04-01 03:50:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:41.740429 | orchestrator | 2026-04-01 03:50:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:44.788589 | orchestrator | 2026-04-01 03:50:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:44.790615 | orchestrator | 2026-04-01 03:50:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:44.790655 | orchestrator | 2026-04-01 03:50:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:47.839449 | orchestrator | 2026-04-01 03:50:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:47.841029 | orchestrator | 2026-04-01 03:50:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:47.841103 | orchestrator | 2026-04-01 03:50:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:50.889632 | orchestrator | 2026-04-01 
03:50:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:50.890979 | orchestrator | 2026-04-01 03:50:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:50.891016 | orchestrator | 2026-04-01 03:50:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:53.942531 | orchestrator | 2026-04-01 03:50:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:53.944092 | orchestrator | 2026-04-01 03:50:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:53.944254 | orchestrator | 2026-04-01 03:50:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:50:56.990678 | orchestrator | 2026-04-01 03:50:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:50:56.992839 | orchestrator | 2026-04-01 03:50:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:50:56.993427 | orchestrator | 2026-04-01 03:50:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:00.031433 | orchestrator | 2026-04-01 03:51:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:00.033367 | orchestrator | 2026-04-01 03:51:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:00.033438 | orchestrator | 2026-04-01 03:51:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:03.077995 | orchestrator | 2026-04-01 03:51:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:03.079070 | orchestrator | 2026-04-01 03:51:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:03.079116 | orchestrator | 2026-04-01 03:51:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:06.126527 | orchestrator | 2026-04-01 03:51:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:51:06.127417 | orchestrator | 2026-04-01 03:51:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:06.127725 | orchestrator | 2026-04-01 03:51:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:09.177151 | orchestrator | 2026-04-01 03:51:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:09.179041 | orchestrator | 2026-04-01 03:51:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:09.179172 | orchestrator | 2026-04-01 03:51:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:12.236856 | orchestrator | 2026-04-01 03:51:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:12.240456 | orchestrator | 2026-04-01 03:51:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:12.241128 | orchestrator | 2026-04-01 03:51:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:15.289950 | orchestrator | 2026-04-01 03:51:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:15.293861 | orchestrator | 2026-04-01 03:51:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:15.293961 | orchestrator | 2026-04-01 03:51:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:18.346503 | orchestrator | 2026-04-01 03:51:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:18.348633 | orchestrator | 2026-04-01 03:51:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:18.348711 | orchestrator | 2026-04-01 03:51:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:21.401468 | orchestrator | 2026-04-01 03:51:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:21.403065 | orchestrator | 2026-04-01 03:51:21 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:21.403217 | orchestrator | 2026-04-01 03:51:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:24.448875 | orchestrator | 2026-04-01 03:51:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:24.450417 | orchestrator | 2026-04-01 03:51:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:24.450504 | orchestrator | 2026-04-01 03:51:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:27.501388 | orchestrator | 2026-04-01 03:51:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:27.503614 | orchestrator | 2026-04-01 03:51:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:27.503693 | orchestrator | 2026-04-01 03:51:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:30.549831 | orchestrator | 2026-04-01 03:51:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:30.552250 | orchestrator | 2026-04-01 03:51:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:30.552321 | orchestrator | 2026-04-01 03:51:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:33.594730 | orchestrator | 2026-04-01 03:51:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:33.596768 | orchestrator | 2026-04-01 03:51:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:33.596812 | orchestrator | 2026-04-01 03:51:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:36.640572 | orchestrator | 2026-04-01 03:51:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:36.642627 | orchestrator | 2026-04-01 03:51:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
03:51:36.642676 | orchestrator | 2026-04-01 03:51:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:39.694761 | orchestrator | 2026-04-01 03:51:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:39.696171 | orchestrator | 2026-04-01 03:51:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:39.696231 | orchestrator | 2026-04-01 03:51:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:42.744411 | orchestrator | 2026-04-01 03:51:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:42.746349 | orchestrator | 2026-04-01 03:51:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:42.746587 | orchestrator | 2026-04-01 03:51:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:45.796629 | orchestrator | 2026-04-01 03:51:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:45.798912 | orchestrator | 2026-04-01 03:51:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:45.799109 | orchestrator | 2026-04-01 03:51:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:48.844568 | orchestrator | 2026-04-01 03:51:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:48.845540 | orchestrator | 2026-04-01 03:51:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:48.845571 | orchestrator | 2026-04-01 03:51:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:51.896044 | orchestrator | 2026-04-01 03:51:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:51.897672 | orchestrator | 2026-04-01 03:51:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:51.897711 | orchestrator | 2026-04-01 03:51:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 03:51:54.943297 | orchestrator | 2026-04-01 03:51:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:54.945881 | orchestrator | 2026-04-01 03:51:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:54.945918 | orchestrator | 2026-04-01 03:51:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:51:58.002870 | orchestrator | 2026-04-01 03:51:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:51:58.004154 | orchestrator | 2026-04-01 03:51:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:51:58.004262 | orchestrator | 2026-04-01 03:51:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:01.045380 | orchestrator | 2026-04-01 03:52:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:01.047204 | orchestrator | 2026-04-01 03:52:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:01.047323 | orchestrator | 2026-04-01 03:52:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:04.094491 | orchestrator | 2026-04-01 03:52:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:04.095735 | orchestrator | 2026-04-01 03:52:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:04.095784 | orchestrator | 2026-04-01 03:52:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:07.137727 | orchestrator | 2026-04-01 03:52:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:07.138281 | orchestrator | 2026-04-01 03:52:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:07.138304 | orchestrator | 2026-04-01 03:52:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:10.187211 | orchestrator | 2026-04-01 
03:52:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:10.190254 | orchestrator | 2026-04-01 03:52:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:10.190335 | orchestrator | 2026-04-01 03:52:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:13.241046 | orchestrator | 2026-04-01 03:52:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:13.242281 | orchestrator | 2026-04-01 03:52:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:13.242364 | orchestrator | 2026-04-01 03:52:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:16.285557 | orchestrator | 2026-04-01 03:52:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:16.286565 | orchestrator | 2026-04-01 03:52:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:16.286668 | orchestrator | 2026-04-01 03:52:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:19.335360 | orchestrator | 2026-04-01 03:52:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:19.336891 | orchestrator | 2026-04-01 03:52:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:19.336920 | orchestrator | 2026-04-01 03:52:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:22.384995 | orchestrator | 2026-04-01 03:52:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:52:22.387328 | orchestrator | 2026-04-01 03:52:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:52:22.387446 | orchestrator | 2026-04-01 03:52:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 03:52:25.439097 | orchestrator | 2026-04-01 03:52:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 03:52:25.441351 | orchestrator | 2026-04-01 03:52:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 03:52:25.441594 | orchestrator | 2026-04-01 03:52:25 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED/wait polling for tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635, repeated every ~3 s from 03:52:28 to 03:59:54, trimmed; the log also skips from 03:56:44 to 03:58:47 ...]
2026-04-01 03:59:57.963100 | orchestrator | 2026-04-01 03:59:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 03:59:57.964749 | orchestrator | 2026-04-01 03:59:57 | INFO 
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 03:59:57.964912 | orchestrator | 2026-04-01 03:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:01.012714 | orchestrator | 2026-04-01 04:00:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:01.015754 | orchestrator | 2026-04-01 04:00:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:01.015859 | orchestrator | 2026-04-01 04:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:04.068157 | orchestrator | 2026-04-01 04:00:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:04.070877 | orchestrator | 2026-04-01 04:00:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:04.070953 | orchestrator | 2026-04-01 04:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:07.121757 | orchestrator | 2026-04-01 04:00:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:07.125979 | orchestrator | 2026-04-01 04:00:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:07.126086 | orchestrator | 2026-04-01 04:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:10.169916 | orchestrator | 2026-04-01 04:00:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:10.170774 | orchestrator | 2026-04-01 04:00:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:10.170811 | orchestrator | 2026-04-01 04:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:13.215835 | orchestrator | 2026-04-01 04:00:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:13.217950 | orchestrator | 2026-04-01 04:00:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:00:13.218233 | orchestrator | 2026-04-01 04:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:16.268510 | orchestrator | 2026-04-01 04:00:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:16.270827 | orchestrator | 2026-04-01 04:00:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:16.270917 | orchestrator | 2026-04-01 04:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:19.320443 | orchestrator | 2026-04-01 04:00:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:19.321843 | orchestrator | 2026-04-01 04:00:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:19.321976 | orchestrator | 2026-04-01 04:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:22.370368 | orchestrator | 2026-04-01 04:00:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:22.372723 | orchestrator | 2026-04-01 04:00:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:22.372806 | orchestrator | 2026-04-01 04:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:25.422253 | orchestrator | 2026-04-01 04:00:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:25.424531 | orchestrator | 2026-04-01 04:00:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:25.424627 | orchestrator | 2026-04-01 04:00:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:28.470692 | orchestrator | 2026-04-01 04:00:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:28.472415 | orchestrator | 2026-04-01 04:00:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:28.472483 | orchestrator | 2026-04-01 04:00:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:00:31.516169 | orchestrator | 2026-04-01 04:00:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:31.518930 | orchestrator | 2026-04-01 04:00:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:31.519035 | orchestrator | 2026-04-01 04:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:34.566383 | orchestrator | 2026-04-01 04:00:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:34.568756 | orchestrator | 2026-04-01 04:00:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:34.568823 | orchestrator | 2026-04-01 04:00:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:37.617153 | orchestrator | 2026-04-01 04:00:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:37.618718 | orchestrator | 2026-04-01 04:00:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:37.618827 | orchestrator | 2026-04-01 04:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:40.654190 | orchestrator | 2026-04-01 04:00:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:40.655813 | orchestrator | 2026-04-01 04:00:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:40.655859 | orchestrator | 2026-04-01 04:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:43.705422 | orchestrator | 2026-04-01 04:00:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:43.706574 | orchestrator | 2026-04-01 04:00:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:43.706776 | orchestrator | 2026-04-01 04:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:46.757713 | orchestrator | 2026-04-01 
04:00:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:46.759431 | orchestrator | 2026-04-01 04:00:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:46.759622 | orchestrator | 2026-04-01 04:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:49.811973 | orchestrator | 2026-04-01 04:00:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:49.813868 | orchestrator | 2026-04-01 04:00:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:49.813942 | orchestrator | 2026-04-01 04:00:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:52.865729 | orchestrator | 2026-04-01 04:00:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:52.867889 | orchestrator | 2026-04-01 04:00:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:52.867963 | orchestrator | 2026-04-01 04:00:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:55.913761 | orchestrator | 2026-04-01 04:00:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:55.914952 | orchestrator | 2026-04-01 04:00:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:55.914991 | orchestrator | 2026-04-01 04:00:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:00:58.962742 | orchestrator | 2026-04-01 04:00:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:00:58.964717 | orchestrator | 2026-04-01 04:00:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:00:58.964756 | orchestrator | 2026-04-01 04:00:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:02.010210 | orchestrator | 2026-04-01 04:01:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:01:02.011672 | orchestrator | 2026-04-01 04:01:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:02.011754 | orchestrator | 2026-04-01 04:01:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:05.064909 | orchestrator | 2026-04-01 04:01:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:05.065736 | orchestrator | 2026-04-01 04:01:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:05.066287 | orchestrator | 2026-04-01 04:01:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:08.110201 | orchestrator | 2026-04-01 04:01:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:08.113018 | orchestrator | 2026-04-01 04:01:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:08.113085 | orchestrator | 2026-04-01 04:01:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:11.158243 | orchestrator | 2026-04-01 04:01:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:11.159821 | orchestrator | 2026-04-01 04:01:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:11.159863 | orchestrator | 2026-04-01 04:01:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:14.204439 | orchestrator | 2026-04-01 04:01:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:14.206325 | orchestrator | 2026-04-01 04:01:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:14.206374 | orchestrator | 2026-04-01 04:01:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:17.252759 | orchestrator | 2026-04-01 04:01:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:17.254782 | orchestrator | 2026-04-01 04:01:17 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:17.254971 | orchestrator | 2026-04-01 04:01:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:20.299531 | orchestrator | 2026-04-01 04:01:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:20.300762 | orchestrator | 2026-04-01 04:01:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:20.300799 | orchestrator | 2026-04-01 04:01:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:23.345517 | orchestrator | 2026-04-01 04:01:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:23.347410 | orchestrator | 2026-04-01 04:01:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:23.347504 | orchestrator | 2026-04-01 04:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:26.393607 | orchestrator | 2026-04-01 04:01:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:26.396622 | orchestrator | 2026-04-01 04:01:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:26.396689 | orchestrator | 2026-04-01 04:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:29.434681 | orchestrator | 2026-04-01 04:01:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:29.436954 | orchestrator | 2026-04-01 04:01:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:29.437002 | orchestrator | 2026-04-01 04:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:32.477510 | orchestrator | 2026-04-01 04:01:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:32.480385 | orchestrator | 2026-04-01 04:01:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:01:32.480442 | orchestrator | 2026-04-01 04:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:35.528169 | orchestrator | 2026-04-01 04:01:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:35.530724 | orchestrator | 2026-04-01 04:01:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:35.530870 | orchestrator | 2026-04-01 04:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:38.577142 | orchestrator | 2026-04-01 04:01:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:38.581595 | orchestrator | 2026-04-01 04:01:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:38.581673 | orchestrator | 2026-04-01 04:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:41.621707 | orchestrator | 2026-04-01 04:01:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:41.624310 | orchestrator | 2026-04-01 04:01:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:41.624391 | orchestrator | 2026-04-01 04:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:44.673781 | orchestrator | 2026-04-01 04:01:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:44.675653 | orchestrator | 2026-04-01 04:01:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:44.675733 | orchestrator | 2026-04-01 04:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:47.717462 | orchestrator | 2026-04-01 04:01:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:47.719049 | orchestrator | 2026-04-01 04:01:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:47.719124 | orchestrator | 2026-04-01 04:01:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:01:50.760680 | orchestrator | 2026-04-01 04:01:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:50.761330 | orchestrator | 2026-04-01 04:01:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:50.761397 | orchestrator | 2026-04-01 04:01:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:53.805963 | orchestrator | 2026-04-01 04:01:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:53.806367 | orchestrator | 2026-04-01 04:01:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:53.806726 | orchestrator | 2026-04-01 04:01:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:56.850430 | orchestrator | 2026-04-01 04:01:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:56.852123 | orchestrator | 2026-04-01 04:01:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:56.852188 | orchestrator | 2026-04-01 04:01:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:01:59.895667 | orchestrator | 2026-04-01 04:01:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:01:59.897713 | orchestrator | 2026-04-01 04:01:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:01:59.897755 | orchestrator | 2026-04-01 04:01:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:02.942522 | orchestrator | 2026-04-01 04:02:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:02.944330 | orchestrator | 2026-04-01 04:02:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:02.944454 | orchestrator | 2026-04-01 04:02:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:05.988128 | orchestrator | 2026-04-01 
04:02:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:05.990350 | orchestrator | 2026-04-01 04:02:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:05.990401 | orchestrator | 2026-04-01 04:02:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:09.040162 | orchestrator | 2026-04-01 04:02:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:09.042838 | orchestrator | 2026-04-01 04:02:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:09.042949 | orchestrator | 2026-04-01 04:02:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:12.087393 | orchestrator | 2026-04-01 04:02:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:12.089511 | orchestrator | 2026-04-01 04:02:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:12.089688 | orchestrator | 2026-04-01 04:02:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:15.134655 | orchestrator | 2026-04-01 04:02:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:15.136093 | orchestrator | 2026-04-01 04:02:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:15.136147 | orchestrator | 2026-04-01 04:02:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:18.186685 | orchestrator | 2026-04-01 04:02:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:18.189721 | orchestrator | 2026-04-01 04:02:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:18.189778 | orchestrator | 2026-04-01 04:02:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:21.235986 | orchestrator | 2026-04-01 04:02:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:02:21.237430 | orchestrator | 2026-04-01 04:02:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:21.237513 | orchestrator | 2026-04-01 04:02:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:24.281940 | orchestrator | 2026-04-01 04:02:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:24.282988 | orchestrator | 2026-04-01 04:02:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:24.283028 | orchestrator | 2026-04-01 04:02:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:27.330978 | orchestrator | 2026-04-01 04:02:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:27.332574 | orchestrator | 2026-04-01 04:02:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:27.332624 | orchestrator | 2026-04-01 04:02:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:30.379677 | orchestrator | 2026-04-01 04:02:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:30.381899 | orchestrator | 2026-04-01 04:02:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:30.382052 | orchestrator | 2026-04-01 04:02:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:33.430354 | orchestrator | 2026-04-01 04:02:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:33.432116 | orchestrator | 2026-04-01 04:02:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:33.432294 | orchestrator | 2026-04-01 04:02:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:36.479132 | orchestrator | 2026-04-01 04:02:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:36.480457 | orchestrator | 2026-04-01 04:02:36 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:36.480493 | orchestrator | 2026-04-01 04:02:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:39.526471 | orchestrator | 2026-04-01 04:02:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:39.528662 | orchestrator | 2026-04-01 04:02:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:39.528739 | orchestrator | 2026-04-01 04:02:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:42.573245 | orchestrator | 2026-04-01 04:02:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:42.574561 | orchestrator | 2026-04-01 04:02:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:42.574599 | orchestrator | 2026-04-01 04:02:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:45.625679 | orchestrator | 2026-04-01 04:02:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:45.628414 | orchestrator | 2026-04-01 04:02:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:45.628450 | orchestrator | 2026-04-01 04:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:48.673349 | orchestrator | 2026-04-01 04:02:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:48.676827 | orchestrator | 2026-04-01 04:02:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:48.677300 | orchestrator | 2026-04-01 04:02:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:51.720404 | orchestrator | 2026-04-01 04:02:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:51.721387 | orchestrator | 2026-04-01 04:02:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:02:51.721527 | orchestrator | 2026-04-01 04:02:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:54.768947 | orchestrator | 2026-04-01 04:02:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:54.771441 | orchestrator | 2026-04-01 04:02:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:54.771494 | orchestrator | 2026-04-01 04:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:02:57.817061 | orchestrator | 2026-04-01 04:02:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:02:57.819109 | orchestrator | 2026-04-01 04:02:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:02:57.819166 | orchestrator | 2026-04-01 04:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:00.875235 | orchestrator | 2026-04-01 04:03:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:00.880063 | orchestrator | 2026-04-01 04:03:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:00.880142 | orchestrator | 2026-04-01 04:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:03.921262 | orchestrator | 2026-04-01 04:03:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:03.923018 | orchestrator | 2026-04-01 04:03:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:03.923064 | orchestrator | 2026-04-01 04:03:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:06.975710 | orchestrator | 2026-04-01 04:03:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:06.977520 | orchestrator | 2026-04-01 04:03:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:06.977742 | orchestrator | 2026-04-01 04:03:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:03:10.022919 | orchestrator | 2026-04-01 04:03:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:10.026454 | orchestrator | 2026-04-01 04:03:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:10.026532 | orchestrator | 2026-04-01 04:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:13.071518 | orchestrator | 2026-04-01 04:03:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:13.072910 | orchestrator | 2026-04-01 04:03:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:13.072990 | orchestrator | 2026-04-01 04:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:16.118362 | orchestrator | 2026-04-01 04:03:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:16.119625 | orchestrator | 2026-04-01 04:03:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:16.119678 | orchestrator | 2026-04-01 04:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:19.166700 | orchestrator | 2026-04-01 04:03:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:19.168434 | orchestrator | 2026-04-01 04:03:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:19.168509 | orchestrator | 2026-04-01 04:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:22.215211 | orchestrator | 2026-04-01 04:03:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:22.217740 | orchestrator | 2026-04-01 04:03:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:22.217791 | orchestrator | 2026-04-01 04:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:25.252871 | orchestrator | 2026-04-01 
04:03:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:25.254133 | orchestrator | 2026-04-01 04:03:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:25.254158 | orchestrator | 2026-04-01 04:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:28.296338 | orchestrator | 2026-04-01 04:03:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:28.298182 | orchestrator | 2026-04-01 04:03:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:28.298238 | orchestrator | 2026-04-01 04:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:31.341482 | orchestrator | 2026-04-01 04:03:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:31.342767 | orchestrator | 2026-04-01 04:03:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:31.342798 | orchestrator | 2026-04-01 04:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:34.390683 | orchestrator | 2026-04-01 04:03:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:34.393681 | orchestrator | 2026-04-01 04:03:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:34.393733 | orchestrator | 2026-04-01 04:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:37.438299 | orchestrator | 2026-04-01 04:03:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:03:37.439821 | orchestrator | 2026-04-01 04:03:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:03:37.439881 | orchestrator | 2026-04-01 04:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:03:40.479351 | orchestrator | 2026-04-01 04:03:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:03:40.480611 | orchestrator | 2026-04-01 04:03:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:03:40.480690 | orchestrator | 2026-04-01 04:03:40 | INFO  | Wait 1 second(s) until the next check
2026-04-01 04:03:43.527455 | orchestrator | 2026-04-01 04:03:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 04:03:43.528875 | orchestrator | 2026-04-01 04:03:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:03:43.529035 | orchestrator | 2026-04-01 04:03:43 | INFO  | Wait 1 second(s) until the next check
2026-04-01 04:08:57.673518 | orchestrator | 2026-04-01 04:08:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state
STARTED 2026-04-01 04:08:57.676428 | orchestrator | 2026-04-01 04:08:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:08:57.676690 | orchestrator | 2026-04-01 04:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:00.726881 | orchestrator | 2026-04-01 04:09:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:00.729569 | orchestrator | 2026-04-01 04:09:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:00.729860 | orchestrator | 2026-04-01 04:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:03.777035 | orchestrator | 2026-04-01 04:09:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:03.778599 | orchestrator | 2026-04-01 04:09:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:03.778788 | orchestrator | 2026-04-01 04:09:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:06.828083 | orchestrator | 2026-04-01 04:09:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:06.830700 | orchestrator | 2026-04-01 04:09:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:06.830771 | orchestrator | 2026-04-01 04:09:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:09.881982 | orchestrator | 2026-04-01 04:09:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:09.882859 | orchestrator | 2026-04-01 04:09:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:09.883062 | orchestrator | 2026-04-01 04:09:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:12.932213 | orchestrator | 2026-04-01 04:09:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:12.933634 | orchestrator | 2026-04-01 04:09:12 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:12.934204 | orchestrator | 2026-04-01 04:09:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:15.982288 | orchestrator | 2026-04-01 04:09:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:15.984927 | orchestrator | 2026-04-01 04:09:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:15.985046 | orchestrator | 2026-04-01 04:09:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:19.037849 | orchestrator | 2026-04-01 04:09:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:19.040215 | orchestrator | 2026-04-01 04:09:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:19.040281 | orchestrator | 2026-04-01 04:09:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:22.088651 | orchestrator | 2026-04-01 04:09:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:22.090472 | orchestrator | 2026-04-01 04:09:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:22.090560 | orchestrator | 2026-04-01 04:09:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:25.130245 | orchestrator | 2026-04-01 04:09:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:25.132245 | orchestrator | 2026-04-01 04:09:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:25.132292 | orchestrator | 2026-04-01 04:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:28.177965 | orchestrator | 2026-04-01 04:09:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:28.179365 | orchestrator | 2026-04-01 04:09:28 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:09:28.179644 | orchestrator | 2026-04-01 04:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:31.232457 | orchestrator | 2026-04-01 04:09:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:31.234679 | orchestrator | 2026-04-01 04:09:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:31.234728 | orchestrator | 2026-04-01 04:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:34.281380 | orchestrator | 2026-04-01 04:09:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:34.283574 | orchestrator | 2026-04-01 04:09:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:34.283690 | orchestrator | 2026-04-01 04:09:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:37.339260 | orchestrator | 2026-04-01 04:09:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:37.341494 | orchestrator | 2026-04-01 04:09:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:37.342188 | orchestrator | 2026-04-01 04:09:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:40.391000 | orchestrator | 2026-04-01 04:09:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:40.393335 | orchestrator | 2026-04-01 04:09:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:40.393438 | orchestrator | 2026-04-01 04:09:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:43.440179 | orchestrator | 2026-04-01 04:09:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:43.442246 | orchestrator | 2026-04-01 04:09:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:43.442361 | orchestrator | 2026-04-01 04:09:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:09:46.485749 | orchestrator | 2026-04-01 04:09:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:46.487151 | orchestrator | 2026-04-01 04:09:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:46.487301 | orchestrator | 2026-04-01 04:09:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:49.534189 | orchestrator | 2026-04-01 04:09:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:49.536844 | orchestrator | 2026-04-01 04:09:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:49.536902 | orchestrator | 2026-04-01 04:09:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:52.580975 | orchestrator | 2026-04-01 04:09:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:52.583433 | orchestrator | 2026-04-01 04:09:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:52.583515 | orchestrator | 2026-04-01 04:09:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:55.637998 | orchestrator | 2026-04-01 04:09:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:55.639334 | orchestrator | 2026-04-01 04:09:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:55.639515 | orchestrator | 2026-04-01 04:09:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:09:58.691974 | orchestrator | 2026-04-01 04:09:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:09:58.694064 | orchestrator | 2026-04-01 04:09:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:09:58.694219 | orchestrator | 2026-04-01 04:09:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:01.739109 | orchestrator | 2026-04-01 
04:10:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:01.739997 | orchestrator | 2026-04-01 04:10:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:01.740049 | orchestrator | 2026-04-01 04:10:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:04.793999 | orchestrator | 2026-04-01 04:10:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:04.795857 | orchestrator | 2026-04-01 04:10:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:04.796659 | orchestrator | 2026-04-01 04:10:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:07.839811 | orchestrator | 2026-04-01 04:10:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:07.841280 | orchestrator | 2026-04-01 04:10:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:07.841315 | orchestrator | 2026-04-01 04:10:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:10.886704 | orchestrator | 2026-04-01 04:10:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:10.887182 | orchestrator | 2026-04-01 04:10:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:10.887391 | orchestrator | 2026-04-01 04:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:13.931797 | orchestrator | 2026-04-01 04:10:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:13.934221 | orchestrator | 2026-04-01 04:10:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:13.934378 | orchestrator | 2026-04-01 04:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:16.978192 | orchestrator | 2026-04-01 04:10:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:10:16.979715 | orchestrator | 2026-04-01 04:10:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:16.979747 | orchestrator | 2026-04-01 04:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:20.029658 | orchestrator | 2026-04-01 04:10:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:20.031683 | orchestrator | 2026-04-01 04:10:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:20.031742 | orchestrator | 2026-04-01 04:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:23.083124 | orchestrator | 2026-04-01 04:10:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:23.085712 | orchestrator | 2026-04-01 04:10:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:23.085789 | orchestrator | 2026-04-01 04:10:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:26.135676 | orchestrator | 2026-04-01 04:10:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:26.137501 | orchestrator | 2026-04-01 04:10:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:26.137547 | orchestrator | 2026-04-01 04:10:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:29.186971 | orchestrator | 2026-04-01 04:10:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:29.187219 | orchestrator | 2026-04-01 04:10:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:29.187255 | orchestrator | 2026-04-01 04:10:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:32.233637 | orchestrator | 2026-04-01 04:10:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:32.235042 | orchestrator | 2026-04-01 04:10:32 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:32.235233 | orchestrator | 2026-04-01 04:10:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:35.288944 | orchestrator | 2026-04-01 04:10:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:35.290707 | orchestrator | 2026-04-01 04:10:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:35.290781 | orchestrator | 2026-04-01 04:10:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:38.336130 | orchestrator | 2026-04-01 04:10:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:38.337595 | orchestrator | 2026-04-01 04:10:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:38.337668 | orchestrator | 2026-04-01 04:10:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:41.391063 | orchestrator | 2026-04-01 04:10:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:41.391157 | orchestrator | 2026-04-01 04:10:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:41.391187 | orchestrator | 2026-04-01 04:10:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:44.430402 | orchestrator | 2026-04-01 04:10:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:44.432246 | orchestrator | 2026-04-01 04:10:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:44.432309 | orchestrator | 2026-04-01 04:10:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:47.471630 | orchestrator | 2026-04-01 04:10:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:47.472988 | orchestrator | 2026-04-01 04:10:47 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:10:47.473057 | orchestrator | 2026-04-01 04:10:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:50.516239 | orchestrator | 2026-04-01 04:10:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:50.518174 | orchestrator | 2026-04-01 04:10:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:50.518285 | orchestrator | 2026-04-01 04:10:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:53.562387 | orchestrator | 2026-04-01 04:10:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:53.563827 | orchestrator | 2026-04-01 04:10:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:53.563875 | orchestrator | 2026-04-01 04:10:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:56.614180 | orchestrator | 2026-04-01 04:10:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:56.614453 | orchestrator | 2026-04-01 04:10:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:56.615593 | orchestrator | 2026-04-01 04:10:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:10:59.667192 | orchestrator | 2026-04-01 04:10:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:10:59.669941 | orchestrator | 2026-04-01 04:10:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:10:59.670077 | orchestrator | 2026-04-01 04:10:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:02.712101 | orchestrator | 2026-04-01 04:11:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:02.713723 | orchestrator | 2026-04-01 04:11:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:02.713797 | orchestrator | 2026-04-01 04:11:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:11:05.766445 | orchestrator | 2026-04-01 04:11:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:05.767715 | orchestrator | 2026-04-01 04:11:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:05.767770 | orchestrator | 2026-04-01 04:11:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:08.826109 | orchestrator | 2026-04-01 04:11:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:08.827667 | orchestrator | 2026-04-01 04:11:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:08.827738 | orchestrator | 2026-04-01 04:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:11.878393 | orchestrator | 2026-04-01 04:11:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:11.880458 | orchestrator | 2026-04-01 04:11:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:11.880553 | orchestrator | 2026-04-01 04:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:14.924995 | orchestrator | 2026-04-01 04:11:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:14.926477 | orchestrator | 2026-04-01 04:11:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:14.926541 | orchestrator | 2026-04-01 04:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:17.972126 | orchestrator | 2026-04-01 04:11:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:17.973163 | orchestrator | 2026-04-01 04:11:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:17.973211 | orchestrator | 2026-04-01 04:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:21.022336 | orchestrator | 2026-04-01 
04:11:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:21.022576 | orchestrator | 2026-04-01 04:11:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:21.022599 | orchestrator | 2026-04-01 04:11:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:24.067410 | orchestrator | 2026-04-01 04:11:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:24.068610 | orchestrator | 2026-04-01 04:11:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:24.068676 | orchestrator | 2026-04-01 04:11:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:27.120022 | orchestrator | 2026-04-01 04:11:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:27.122140 | orchestrator | 2026-04-01 04:11:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:27.122516 | orchestrator | 2026-04-01 04:11:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:30.167055 | orchestrator | 2026-04-01 04:11:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:30.169124 | orchestrator | 2026-04-01 04:11:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:30.169196 | orchestrator | 2026-04-01 04:11:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:33.219253 | orchestrator | 2026-04-01 04:11:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:33.220675 | orchestrator | 2026-04-01 04:11:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:33.220733 | orchestrator | 2026-04-01 04:11:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:36.271805 | orchestrator | 2026-04-01 04:11:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:11:36.273506 | orchestrator | 2026-04-01 04:11:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:36.273541 | orchestrator | 2026-04-01 04:11:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:39.326916 | orchestrator | 2026-04-01 04:11:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:39.327735 | orchestrator | 2026-04-01 04:11:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:39.327850 | orchestrator | 2026-04-01 04:11:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:42.375811 | orchestrator | 2026-04-01 04:11:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:42.377248 | orchestrator | 2026-04-01 04:11:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:42.377317 | orchestrator | 2026-04-01 04:11:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:45.427329 | orchestrator | 2026-04-01 04:11:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:45.429387 | orchestrator | 2026-04-01 04:11:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:45.429493 | orchestrator | 2026-04-01 04:11:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:48.478266 | orchestrator | 2026-04-01 04:11:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:48.479810 | orchestrator | 2026-04-01 04:11:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:48.479898 | orchestrator | 2026-04-01 04:11:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:51.527738 | orchestrator | 2026-04-01 04:11:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:51.529198 | orchestrator | 2026-04-01 04:11:51 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:51.529240 | orchestrator | 2026-04-01 04:11:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:54.578362 | orchestrator | 2026-04-01 04:11:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:54.581354 | orchestrator | 2026-04-01 04:11:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:54.581442 | orchestrator | 2026-04-01 04:11:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:11:57.631153 | orchestrator | 2026-04-01 04:11:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:11:57.632103 | orchestrator | 2026-04-01 04:11:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:11:57.632142 | orchestrator | 2026-04-01 04:11:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:00.675933 | orchestrator | 2026-04-01 04:12:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:00.677645 | orchestrator | 2026-04-01 04:12:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:00.677698 | orchestrator | 2026-04-01 04:12:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:03.718264 | orchestrator | 2026-04-01 04:12:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:03.719066 | orchestrator | 2026-04-01 04:12:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:03.719126 | orchestrator | 2026-04-01 04:12:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:06.762207 | orchestrator | 2026-04-01 04:12:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:06.763845 | orchestrator | 2026-04-01 04:12:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:12:06.763891 | orchestrator | 2026-04-01 04:12:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:09.812866 | orchestrator | 2026-04-01 04:12:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:09.814373 | orchestrator | 2026-04-01 04:12:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:09.814643 | orchestrator | 2026-04-01 04:12:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:12.855492 | orchestrator | 2026-04-01 04:12:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:12.857848 | orchestrator | 2026-04-01 04:12:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:12.858156 | orchestrator | 2026-04-01 04:12:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:15.914542 | orchestrator | 2026-04-01 04:12:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:15.915663 | orchestrator | 2026-04-01 04:12:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:15.915715 | orchestrator | 2026-04-01 04:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:18.960527 | orchestrator | 2026-04-01 04:12:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:18.961384 | orchestrator | 2026-04-01 04:12:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:18.961420 | orchestrator | 2026-04-01 04:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:12:21.999653 | orchestrator | 2026-04-01 04:12:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:12:22.001449 | orchestrator | 2026-04-01 04:12:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:12:22.001541 | orchestrator | 2026-04-01 04:12:21 | INFO  | Wait 1 second(s) 
until the next check
2026-04-01 04:12:25.047644 | orchestrator | 2026-04-01 04:12:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 04:12:25.049588 | orchestrator | 2026-04-01 04:12:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:12:25.049734 | orchestrator | 2026-04-01 04:12:25 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated roughly every 3 seconds from 04:12:28 through 04:17:36; tasks c1541cda-9028-417f-bdfe-1444d21f7539 and 26afe088-ea9f-472a-a860-0310c526e635 remained in state STARTED throughout ...]
2026-04-01 04:17:39.343112 | orchestrator | 2026-04-01 04:17:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 04:17:39.344859 | orchestrator | 2026-04-01 04:17:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:17:39.344922 | orchestrator | 2026-04-01 04:17:39 | INFO  | Wait 1 second(s)
until the next check 2026-04-01 04:17:42.390675 | orchestrator | 2026-04-01 04:17:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:42.392505 | orchestrator | 2026-04-01 04:17:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:42.392600 | orchestrator | 2026-04-01 04:17:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:17:45.442262 | orchestrator | 2026-04-01 04:17:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:45.444212 | orchestrator | 2026-04-01 04:17:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:45.444260 | orchestrator | 2026-04-01 04:17:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:17:48.491988 | orchestrator | 2026-04-01 04:17:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:48.493364 | orchestrator | 2026-04-01 04:17:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:48.493432 | orchestrator | 2026-04-01 04:17:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:17:51.538856 | orchestrator | 2026-04-01 04:17:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:51.540057 | orchestrator | 2026-04-01 04:17:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:51.540193 | orchestrator | 2026-04-01 04:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:17:54.583170 | orchestrator | 2026-04-01 04:17:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:54.584309 | orchestrator | 2026-04-01 04:17:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:54.584417 | orchestrator | 2026-04-01 04:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:17:57.624209 | orchestrator | 2026-04-01 
04:17:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:17:57.626713 | orchestrator | 2026-04-01 04:17:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:17:57.626779 | orchestrator | 2026-04-01 04:17:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:00.676468 | orchestrator | 2026-04-01 04:18:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:00.678091 | orchestrator | 2026-04-01 04:18:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:00.678126 | orchestrator | 2026-04-01 04:18:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:03.728671 | orchestrator | 2026-04-01 04:18:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:03.733049 | orchestrator | 2026-04-01 04:18:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:03.733131 | orchestrator | 2026-04-01 04:18:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:06.777845 | orchestrator | 2026-04-01 04:18:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:06.779315 | orchestrator | 2026-04-01 04:18:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:06.779444 | orchestrator | 2026-04-01 04:18:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:09.830950 | orchestrator | 2026-04-01 04:18:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:09.832794 | orchestrator | 2026-04-01 04:18:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:09.832859 | orchestrator | 2026-04-01 04:18:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:12.884346 | orchestrator | 2026-04-01 04:18:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:18:12.885663 | orchestrator | 2026-04-01 04:18:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:12.885725 | orchestrator | 2026-04-01 04:18:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:15.929201 | orchestrator | 2026-04-01 04:18:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:15.930187 | orchestrator | 2026-04-01 04:18:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:15.930318 | orchestrator | 2026-04-01 04:18:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:18.983312 | orchestrator | 2026-04-01 04:18:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:18.984945 | orchestrator | 2026-04-01 04:18:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:18.984994 | orchestrator | 2026-04-01 04:18:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:22.038881 | orchestrator | 2026-04-01 04:18:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:22.040481 | orchestrator | 2026-04-01 04:18:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:22.040632 | orchestrator | 2026-04-01 04:18:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:25.086218 | orchestrator | 2026-04-01 04:18:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:25.088034 | orchestrator | 2026-04-01 04:18:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:25.088070 | orchestrator | 2026-04-01 04:18:25 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:28.135888 | orchestrator | 2026-04-01 04:18:28 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:28.136097 | orchestrator | 2026-04-01 04:18:28 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:28.136117 | orchestrator | 2026-04-01 04:18:28 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:31.187800 | orchestrator | 2026-04-01 04:18:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:31.189128 | orchestrator | 2026-04-01 04:18:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:31.189213 | orchestrator | 2026-04-01 04:18:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:34.241776 | orchestrator | 2026-04-01 04:18:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:34.243449 | orchestrator | 2026-04-01 04:18:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:34.243523 | orchestrator | 2026-04-01 04:18:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:37.285137 | orchestrator | 2026-04-01 04:18:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:37.287255 | orchestrator | 2026-04-01 04:18:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:37.287320 | orchestrator | 2026-04-01 04:18:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:40.337167 | orchestrator | 2026-04-01 04:18:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:40.338794 | orchestrator | 2026-04-01 04:18:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:40.338872 | orchestrator | 2026-04-01 04:18:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:43.389178 | orchestrator | 2026-04-01 04:18:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:43.390089 | orchestrator | 2026-04-01 04:18:43 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:18:43.390141 | orchestrator | 2026-04-01 04:18:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:46.456073 | orchestrator | 2026-04-01 04:18:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:46.456540 | orchestrator | 2026-04-01 04:18:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:46.456716 | orchestrator | 2026-04-01 04:18:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:49.499849 | orchestrator | 2026-04-01 04:18:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:49.501930 | orchestrator | 2026-04-01 04:18:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:49.501990 | orchestrator | 2026-04-01 04:18:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:52.546393 | orchestrator | 2026-04-01 04:18:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:52.549033 | orchestrator | 2026-04-01 04:18:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:52.549085 | orchestrator | 2026-04-01 04:18:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:55.597269 | orchestrator | 2026-04-01 04:18:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:55.599335 | orchestrator | 2026-04-01 04:18:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:55.599392 | orchestrator | 2026-04-01 04:18:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:18:58.651225 | orchestrator | 2026-04-01 04:18:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:18:58.653182 | orchestrator | 2026-04-01 04:18:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:18:58.653274 | orchestrator | 2026-04-01 04:18:58 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:19:01.704716 | orchestrator | 2026-04-01 04:19:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:01.705600 | orchestrator | 2026-04-01 04:19:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:01.705847 | orchestrator | 2026-04-01 04:19:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:04.751171 | orchestrator | 2026-04-01 04:19:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:04.758373 | orchestrator | 2026-04-01 04:19:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:04.758468 | orchestrator | 2026-04-01 04:19:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:07.802319 | orchestrator | 2026-04-01 04:19:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:07.804478 | orchestrator | 2026-04-01 04:19:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:07.804668 | orchestrator | 2026-04-01 04:19:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:10.836952 | orchestrator | 2026-04-01 04:19:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:10.839330 | orchestrator | 2026-04-01 04:19:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:10.839356 | orchestrator | 2026-04-01 04:19:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:13.883347 | orchestrator | 2026-04-01 04:19:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:13.884881 | orchestrator | 2026-04-01 04:19:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:13.884926 | orchestrator | 2026-04-01 04:19:13 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:16.931170 | orchestrator | 2026-04-01 
04:19:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:16.932749 | orchestrator | 2026-04-01 04:19:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:16.932828 | orchestrator | 2026-04-01 04:19:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:19.976833 | orchestrator | 2026-04-01 04:19:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:19.978358 | orchestrator | 2026-04-01 04:19:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:19.978411 | orchestrator | 2026-04-01 04:19:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:23.033629 | orchestrator | 2026-04-01 04:19:23 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:23.035785 | orchestrator | 2026-04-01 04:19:23 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:23.035822 | orchestrator | 2026-04-01 04:19:23 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:26.082622 | orchestrator | 2026-04-01 04:19:26 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:26.084093 | orchestrator | 2026-04-01 04:19:26 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:26.084174 | orchestrator | 2026-04-01 04:19:26 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:29.132695 | orchestrator | 2026-04-01 04:19:29 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:29.136043 | orchestrator | 2026-04-01 04:19:29 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:29.136114 | orchestrator | 2026-04-01 04:19:29 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:32.185008 | orchestrator | 2026-04-01 04:19:32 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:19:32.186250 | orchestrator | 2026-04-01 04:19:32 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:32.186388 | orchestrator | 2026-04-01 04:19:32 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:35.232104 | orchestrator | 2026-04-01 04:19:35 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:35.233030 | orchestrator | 2026-04-01 04:19:35 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:35.233089 | orchestrator | 2026-04-01 04:19:35 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:38.274875 | orchestrator | 2026-04-01 04:19:38 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:38.276876 | orchestrator | 2026-04-01 04:19:38 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:38.276910 | orchestrator | 2026-04-01 04:19:38 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:41.331617 | orchestrator | 2026-04-01 04:19:41 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:41.332823 | orchestrator | 2026-04-01 04:19:41 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:41.332857 | orchestrator | 2026-04-01 04:19:41 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:44.374883 | orchestrator | 2026-04-01 04:19:44 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:44.376187 | orchestrator | 2026-04-01 04:19:44 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:44.376244 | orchestrator | 2026-04-01 04:19:44 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:47.423608 | orchestrator | 2026-04-01 04:19:47 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:47.426228 | orchestrator | 2026-04-01 04:19:47 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:47.426308 | orchestrator | 2026-04-01 04:19:47 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:50.470307 | orchestrator | 2026-04-01 04:19:50 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:50.471214 | orchestrator | 2026-04-01 04:19:50 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:50.471260 | orchestrator | 2026-04-01 04:19:50 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:53.517670 | orchestrator | 2026-04-01 04:19:53 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:53.520210 | orchestrator | 2026-04-01 04:19:53 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:53.520286 | orchestrator | 2026-04-01 04:19:53 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:56.565647 | orchestrator | 2026-04-01 04:19:56 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:56.568553 | orchestrator | 2026-04-01 04:19:56 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:56.568622 | orchestrator | 2026-04-01 04:19:56 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:19:59.620823 | orchestrator | 2026-04-01 04:19:59 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:19:59.621710 | orchestrator | 2026-04-01 04:19:59 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:19:59.621849 | orchestrator | 2026-04-01 04:19:59 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:02.675768 | orchestrator | 2026-04-01 04:20:02 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:02.677872 | orchestrator | 2026-04-01 04:20:02 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:20:02.677961 | orchestrator | 2026-04-01 04:20:02 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:05.725274 | orchestrator | 2026-04-01 04:20:05 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:05.726961 | orchestrator | 2026-04-01 04:20:05 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:05.727188 | orchestrator | 2026-04-01 04:20:05 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:08.776405 | orchestrator | 2026-04-01 04:20:08 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:08.777668 | orchestrator | 2026-04-01 04:20:08 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:08.777778 | orchestrator | 2026-04-01 04:20:08 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:11.827477 | orchestrator | 2026-04-01 04:20:11 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:11.828969 | orchestrator | 2026-04-01 04:20:11 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:11.829010 | orchestrator | 2026-04-01 04:20:11 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:14.875817 | orchestrator | 2026-04-01 04:20:14 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:14.876773 | orchestrator | 2026-04-01 04:20:14 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:14.876887 | orchestrator | 2026-04-01 04:20:14 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:17.921838 | orchestrator | 2026-04-01 04:20:17 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:17.924052 | orchestrator | 2026-04-01 04:20:17 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:17.924144 | orchestrator | 2026-04-01 04:20:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:20:20.964839 | orchestrator | 2026-04-01 04:20:20 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:20.966231 | orchestrator | 2026-04-01 04:20:20 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:20.966278 | orchestrator | 2026-04-01 04:20:20 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:24.016512 | orchestrator | 2026-04-01 04:20:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:24.018498 | orchestrator | 2026-04-01 04:20:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:24.018601 | orchestrator | 2026-04-01 04:20:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:27.071362 | orchestrator | 2026-04-01 04:20:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:27.072419 | orchestrator | 2026-04-01 04:20:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:27.072473 | orchestrator | 2026-04-01 04:20:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:30.121249 | orchestrator | 2026-04-01 04:20:30 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:30.123268 | orchestrator | 2026-04-01 04:20:30 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:30.123402 | orchestrator | 2026-04-01 04:20:30 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:33.174123 | orchestrator | 2026-04-01 04:20:33 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:33.174853 | orchestrator | 2026-04-01 04:20:33 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:33.175538 | orchestrator | 2026-04-01 04:20:33 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:36.216336 | orchestrator | 2026-04-01 
04:20:36 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:36.217636 | orchestrator | 2026-04-01 04:20:36 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:36.217701 | orchestrator | 2026-04-01 04:20:36 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:39.268917 | orchestrator | 2026-04-01 04:20:39 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:39.270920 | orchestrator | 2026-04-01 04:20:39 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:39.271001 | orchestrator | 2026-04-01 04:20:39 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:42.320012 | orchestrator | 2026-04-01 04:20:42 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:42.321834 | orchestrator | 2026-04-01 04:20:42 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:42.321884 | orchestrator | 2026-04-01 04:20:42 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:45.373879 | orchestrator | 2026-04-01 04:20:45 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:45.375542 | orchestrator | 2026-04-01 04:20:45 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:45.375693 | orchestrator | 2026-04-01 04:20:45 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:48.425450 | orchestrator | 2026-04-01 04:20:48 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:48.427364 | orchestrator | 2026-04-01 04:20:48 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:48.427424 | orchestrator | 2026-04-01 04:20:48 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:51.476231 | orchestrator | 2026-04-01 04:20:51 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:20:51.477804 | orchestrator | 2026-04-01 04:20:51 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:51.477851 | orchestrator | 2026-04-01 04:20:51 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:54.527490 | orchestrator | 2026-04-01 04:20:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:54.529130 | orchestrator | 2026-04-01 04:20:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:54.529179 | orchestrator | 2026-04-01 04:20:54 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:20:57.572542 | orchestrator | 2026-04-01 04:20:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:20:57.572703 | orchestrator | 2026-04-01 04:20:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:20:57.572909 | orchestrator | 2026-04-01 04:20:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:00.623242 | orchestrator | 2026-04-01 04:21:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:00.624674 | orchestrator | 2026-04-01 04:21:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:00.624729 | orchestrator | 2026-04-01 04:21:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:03.671829 | orchestrator | 2026-04-01 04:21:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:03.673170 | orchestrator | 2026-04-01 04:21:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:03.673210 | orchestrator | 2026-04-01 04:21:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:06.727931 | orchestrator | 2026-04-01 04:21:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:06.729151 | orchestrator | 2026-04-01 04:21:06 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:06.729200 | orchestrator | 2026-04-01 04:21:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:09.779716 | orchestrator | 2026-04-01 04:21:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:09.781515 | orchestrator | 2026-04-01 04:21:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:09.781593 | orchestrator | 2026-04-01 04:21:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:12.831063 | orchestrator | 2026-04-01 04:21:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:12.831679 | orchestrator | 2026-04-01 04:21:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:12.831717 | orchestrator | 2026-04-01 04:21:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:15.880417 | orchestrator | 2026-04-01 04:21:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:15.883353 | orchestrator | 2026-04-01 04:21:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:15.883425 | orchestrator | 2026-04-01 04:21:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:18.929498 | orchestrator | 2026-04-01 04:21:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:18.930836 | orchestrator | 2026-04-01 04:21:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:21:18.930863 | orchestrator | 2026-04-01 04:21:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:21:21.971281 | orchestrator | 2026-04-01 04:21:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:21:21.973389 | orchestrator | 2026-04-01 04:21:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:21:21.973485 | orchestrator | 2026-04-01 04:21:21 | INFO  | Wait 1 second(s) until the next check
2026-04-01 04:21:25.025210 | orchestrator | 2026-04-01 04:21:25 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 04:21:25.027264 | orchestrator | 2026-04-01 04:21:25 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:21:25.027320 | orchestrator | 2026-04-01 04:21:25 | INFO  | Wait 1 second(s) until the next check
[... the same three INFO messages (Task c1541cda-9028-417f-bdfe-1444d21f7539 in state STARTED, Task 26afe088-ea9f-472a-a860-0310c526e635 in state STARTED, then the wait notice) repeat every ~3 seconds from 04:21:28 through 04:26:51 ...]
2026-04-01 04:26:54.410423 | orchestrator | 2026-04-01 04:26:54 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED
2026-04-01 04:26:54.412936 | orchestrator | 2026-04-01 04:26:54 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED
2026-04-01 04:26:54.413005 | orchestrator | 2026-04-01 04:26:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:26:57.464557 | orchestrator | 2026-04-01 04:26:57 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:26:57.466805 | orchestrator | 2026-04-01 04:26:57 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:26:57.466925 | orchestrator | 2026-04-01 04:26:57 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:00.507705 | orchestrator | 2026-04-01 04:27:00 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:00.512556 | orchestrator | 2026-04-01 04:27:00 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:00.512704 | orchestrator | 2026-04-01 04:27:00 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:03.564813 | orchestrator | 2026-04-01 04:27:03 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:03.566403 | orchestrator | 2026-04-01 04:27:03 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:03.566483 | orchestrator | 2026-04-01 04:27:03 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:06.615814 | orchestrator | 2026-04-01 04:27:06 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:06.617755 | orchestrator | 2026-04-01 04:27:06 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:06.617829 | orchestrator | 2026-04-01 04:27:06 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:09.670940 | orchestrator | 2026-04-01 04:27:09 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:09.672514 | orchestrator | 2026-04-01 04:27:09 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:09.672559 | orchestrator | 2026-04-01 04:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:12.729638 | orchestrator | 2026-04-01 
04:27:12 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:12.730662 | orchestrator | 2026-04-01 04:27:12 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:12.730764 | orchestrator | 2026-04-01 04:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:15.778011 | orchestrator | 2026-04-01 04:27:15 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:15.779735 | orchestrator | 2026-04-01 04:27:15 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:15.780275 | orchestrator | 2026-04-01 04:27:15 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:18.825628 | orchestrator | 2026-04-01 04:27:18 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:18.826339 | orchestrator | 2026-04-01 04:27:18 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:18.826377 | orchestrator | 2026-04-01 04:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:21.875360 | orchestrator | 2026-04-01 04:27:21 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:21.877728 | orchestrator | 2026-04-01 04:27:21 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:21.877776 | orchestrator | 2026-04-01 04:27:21 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:24.926902 | orchestrator | 2026-04-01 04:27:24 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:24.927969 | orchestrator | 2026-04-01 04:27:24 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:24.928023 | orchestrator | 2026-04-01 04:27:24 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:27.985299 | orchestrator | 2026-04-01 04:27:27 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state 
STARTED 2026-04-01 04:27:27.988012 | orchestrator | 2026-04-01 04:27:27 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:27.988090 | orchestrator | 2026-04-01 04:27:27 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:31.045038 | orchestrator | 2026-04-01 04:27:31 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:31.047329 | orchestrator | 2026-04-01 04:27:31 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:31.048218 | orchestrator | 2026-04-01 04:27:31 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:34.099122 | orchestrator | 2026-04-01 04:27:34 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:34.100678 | orchestrator | 2026-04-01 04:27:34 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:34.100714 | orchestrator | 2026-04-01 04:27:34 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:37.147944 | orchestrator | 2026-04-01 04:27:37 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:37.149917 | orchestrator | 2026-04-01 04:27:37 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:37.149974 | orchestrator | 2026-04-01 04:27:37 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:40.195127 | orchestrator | 2026-04-01 04:27:40 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:40.197495 | orchestrator | 2026-04-01 04:27:40 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:40.197712 | orchestrator | 2026-04-01 04:27:40 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:43.247862 | orchestrator | 2026-04-01 04:27:43 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:43.248898 | orchestrator | 2026-04-01 04:27:43 | INFO  
| Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:43.248933 | orchestrator | 2026-04-01 04:27:43 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:46.302527 | orchestrator | 2026-04-01 04:27:46 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:27:46.303878 | orchestrator | 2026-04-01 04:27:46 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:27:46.303936 | orchestrator | 2026-04-01 04:27:46 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:27:49.354647 | orchestrator | 2026-04-01 04:27:49 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:29:49.449295 | orchestrator | 2026-04-01 04:29:49 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:29:49.449408 | orchestrator | 2026-04-01 04:29:49 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:29:52.493950 | orchestrator | 2026-04-01 04:29:52 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:29:52.496256 | orchestrator | 2026-04-01 04:29:52 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:29:52.496311 | orchestrator | 2026-04-01 04:29:52 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:29:55.537608 | orchestrator | 2026-04-01 04:29:55 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:29:55.539910 | orchestrator | 2026-04-01 04:29:55 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:29:55.539988 | orchestrator | 2026-04-01 04:29:55 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:29:58.592022 | orchestrator | 2026-04-01 04:29:58 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:29:58.592728 | orchestrator | 2026-04-01 04:29:58 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 
04:29:58.593413 | orchestrator | 2026-04-01 04:29:58 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:01.638204 | orchestrator | 2026-04-01 04:30:01 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:01.638457 | orchestrator | 2026-04-01 04:30:01 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:01.639093 | orchestrator | 2026-04-01 04:30:01 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:04.684761 | orchestrator | 2026-04-01 04:30:04 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:04.686964 | orchestrator | 2026-04-01 04:30:04 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:04.687010 | orchestrator | 2026-04-01 04:30:04 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:07.734337 | orchestrator | 2026-04-01 04:30:07 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:07.736159 | orchestrator | 2026-04-01 04:30:07 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:07.736210 | orchestrator | 2026-04-01 04:30:07 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:10.784815 | orchestrator | 2026-04-01 04:30:10 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:10.786909 | orchestrator | 2026-04-01 04:30:10 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:10.786997 | orchestrator | 2026-04-01 04:30:10 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:13.825918 | orchestrator | 2026-04-01 04:30:13 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:13.826229 | orchestrator | 2026-04-01 04:30:13 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:13.826272 | orchestrator | 2026-04-01 04:30:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-01 04:30:16.866326 | orchestrator | 2026-04-01 04:30:16 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:16.866734 | orchestrator | 2026-04-01 04:30:16 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:16.866779 | orchestrator | 2026-04-01 04:30:16 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:19.913791 | orchestrator | 2026-04-01 04:30:19 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:19.913901 | orchestrator | 2026-04-01 04:30:19 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:19.914114 | orchestrator | 2026-04-01 04:30:19 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:22.957432 | orchestrator | 2026-04-01 04:30:22 | INFO  | Task c1541cda-9028-417f-bdfe-1444d21f7539 is in state STARTED 2026-04-01 04:30:22.958822 | orchestrator | 2026-04-01 04:30:22 | INFO  | Task 26afe088-ea9f-472a-a860-0310c526e635 is in state STARTED 2026-04-01 04:30:22.958861 | orchestrator | 2026-04-01 04:30:22 | INFO  | Wait 1 second(s) until the next check 2026-04-01 04:30:24.745536 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-04-01 04:30:24.748863 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-01 04:30:25.594923 | 2026-04-01 04:30:25.595102 | PLAY [Post output play] 2026-04-01 04:30:25.616172 | 2026-04-01 04:30:25.616359 | LOOP [stage-output : Register sources] 2026-04-01 04:30:25.688351 | 2026-04-01 04:30:25.688724 | TASK [stage-output : Check sudo] 2026-04-01 04:30:26.573647 | orchestrator | sudo: a password is required 2026-04-01 04:30:26.728499 | orchestrator | ok: Runtime: 0:00:00.015324 2026-04-01 04:30:26.743678 | 2026-04-01 04:30:26.743851 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-01 04:30:26.781942 | 2026-04-01 
04:30:26.782219 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-01 04:30:26.861123 | orchestrator | ok 2026-04-01 04:30:26.871595 | 2026-04-01 04:30:26.871754 | LOOP [stage-output : Ensure target folders exist] 2026-04-01 04:30:27.341825 | orchestrator | ok: "docs" 2026-04-01 04:30:27.342144 | 2026-04-01 04:30:27.615247 | orchestrator | ok: "artifacts" 2026-04-01 04:30:27.889554 | orchestrator | ok: "logs" 2026-04-01 04:30:27.907744 | 2026-04-01 04:30:27.907956 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-01 04:30:27.946931 | 2026-04-01 04:30:27.947229 | TASK [stage-output : Make all log files readable] 2026-04-01 04:30:28.243347 | orchestrator | ok 2026-04-01 04:30:28.252955 | 2026-04-01 04:30:28.253088 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-01 04:30:28.287870 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:28.308115 | 2026-04-01 04:30:28.308322 | TASK [stage-output : Discover log files for compression] 2026-04-01 04:30:28.343286 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:28.352316 | 2026-04-01 04:30:28.352487 | LOOP [stage-output : Archive everything from logs] 2026-04-01 04:30:28.390553 | 2026-04-01 04:30:28.390728 | PLAY [Post cleanup play] 2026-04-01 04:30:28.399846 | 2026-04-01 04:30:28.399964 | TASK [Set cloud fact (Zuul deployment)] 2026-04-01 04:30:28.458021 | orchestrator | ok 2026-04-01 04:30:28.471154 | 2026-04-01 04:30:28.471305 | TASK [Set cloud fact (local deployment)] 2026-04-01 04:30:28.506904 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:28.522137 | 2026-04-01 04:30:28.522300 | TASK [Clean the cloud environment] 2026-04-01 04:30:30.652926 | orchestrator | 2026-04-01 04:30:30 - clean up servers 2026-04-01 04:30:31.512077 | orchestrator | 2026-04-01 04:30:31 - testbed-manager 2026-04-01 04:30:31.607890 | orchestrator | 2026-04-01 04:30:31 - testbed-node-5 2026-04-01 
04:30:31.699897 | orchestrator | 2026-04-01 04:30:31 - testbed-node-2 2026-04-01 04:30:31.783363 | orchestrator | 2026-04-01 04:30:31 - testbed-node-4 2026-04-01 04:30:31.873632 | orchestrator | 2026-04-01 04:30:31 - testbed-node-0 2026-04-01 04:30:31.963072 | orchestrator | 2026-04-01 04:30:31 - testbed-node-3 2026-04-01 04:30:32.055153 | orchestrator | 2026-04-01 04:30:32 - testbed-node-1 2026-04-01 04:30:32.147358 | orchestrator | 2026-04-01 04:30:32 - clean up keypairs 2026-04-01 04:30:32.169479 | orchestrator | 2026-04-01 04:30:32 - testbed 2026-04-01 04:30:32.195270 | orchestrator | 2026-04-01 04:30:32 - wait for servers to be gone 2026-04-01 04:30:41.148807 | orchestrator | 2026-04-01 04:30:41 - clean up ports 2026-04-01 04:30:41.363414 | orchestrator | 2026-04-01 04:30:41 - 00e5b56c-6b35-4e88-abec-e08a4d55e2a1 2026-04-01 04:30:41.657762 | orchestrator | 2026-04-01 04:30:41 - 1c0f3d37-f062-44ce-96be-e8f94c70a28f 2026-04-01 04:30:41.921244 | orchestrator | 2026-04-01 04:30:41 - 5c4c43eb-7eb5-4e70-bddc-22f3a0eb758a 2026-04-01 04:30:42.154341 | orchestrator | 2026-04-01 04:30:42 - 62dcba12-addc-451e-9a61-78d47a4e2eef 2026-04-01 04:30:42.666325 | orchestrator | 2026-04-01 04:30:42 - 8afaa5f5-c130-4dc1-99f9-3ca1dae4c2c1 2026-04-01 04:30:42.892620 | orchestrator | 2026-04-01 04:30:42 - 90ecc4f2-c472-426a-937f-6901c27de743 2026-04-01 04:30:43.130973 | orchestrator | 2026-04-01 04:30:43 - 9cb1feae-a58a-4c48-86d3-e38988c0517c 2026-04-01 04:30:43.344148 | orchestrator | 2026-04-01 04:30:43 - clean up volumes 2026-04-01 04:30:43.468280 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-4-node-base 2026-04-01 04:30:43.508974 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-5-node-base 2026-04-01 04:30:43.553099 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-2-node-base 2026-04-01 04:30:43.599905 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-0-node-base 2026-04-01 04:30:43.731423 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-1-node-base 
2026-04-01 04:30:43.781544 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-3-node-base 2026-04-01 04:30:43.835415 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-8-node-5 2026-04-01 04:30:43.888543 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-manager-base 2026-04-01 04:30:43.955431 | orchestrator | 2026-04-01 04:30:43 - testbed-volume-5-node-5 2026-04-01 04:30:44.017629 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-2-node-5 2026-04-01 04:30:44.068963 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-0-node-3 2026-04-01 04:30:44.130614 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-7-node-4 2026-04-01 04:30:44.186222 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-6-node-3 2026-04-01 04:30:44.238082 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-4-node-4 2026-04-01 04:30:44.288910 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-1-node-4 2026-04-01 04:30:44.341330 | orchestrator | 2026-04-01 04:30:44 - testbed-volume-3-node-3 2026-04-01 04:30:44.395036 | orchestrator | 2026-04-01 04:30:44 - disconnect routers 2026-04-01 04:30:44.526953 | orchestrator | 2026-04-01 04:30:44 - testbed 2026-04-01 04:30:45.676334 | orchestrator | 2026-04-01 04:30:45 - clean up subnets 2026-04-01 04:30:45.739056 | orchestrator | 2026-04-01 04:30:45 - subnet-testbed-management 2026-04-01 04:30:45.933663 | orchestrator | 2026-04-01 04:30:45 - clean up networks 2026-04-01 04:30:46.108005 | orchestrator | 2026-04-01 04:30:46 - net-testbed-management 2026-04-01 04:30:46.458390 | orchestrator | 2026-04-01 04:30:46 - clean up security groups 2026-04-01 04:30:46.510564 | orchestrator | 2026-04-01 04:30:46 - testbed-node 2026-04-01 04:30:46.648154 | orchestrator | 2026-04-01 04:30:46 - testbed-management 2026-04-01 04:30:46.793875 | orchestrator | 2026-04-01 04:30:46 - clean up floating ips 2026-04-01 04:30:46.830387 | orchestrator | 2026-04-01 04:30:46 - 81.163.192.126 2026-04-01 04:30:47.283893 | orchestrator | 2026-04-01 04:30:47 - clean 
up routers 2026-04-01 04:30:47.403923 | orchestrator | 2026-04-01 04:30:47 - testbed 2026-04-01 04:30:49.080715 | orchestrator | ok: Runtime: 0:00:20.193389 2026-04-01 04:30:49.084150 | 2026-04-01 04:30:49.084276 | PLAY RECAP 2026-04-01 04:30:49.084369 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-04-01 04:30:49.084411 | 2026-04-01 04:30:49.229369 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-01 04:30:49.230639 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-01 04:30:50.038634 | 2026-04-01 04:30:50.038806 | PLAY [Cleanup play] 2026-04-01 04:30:50.056967 | 2026-04-01 04:30:50.057128 | TASK [Set cloud fact (Zuul deployment)] 2026-04-01 04:30:50.113607 | orchestrator | ok 2026-04-01 04:30:50.127090 | 2026-04-01 04:30:50.127264 | TASK [Set cloud fact (local deployment)] 2026-04-01 04:30:50.162105 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:50.178620 | 2026-04-01 04:30:50.178765 | TASK [Clean the cloud environment] 2026-04-01 04:30:51.370224 | orchestrator | 2026-04-01 04:30:51 - clean up servers 2026-04-01 04:30:51.962546 | orchestrator | 2026-04-01 04:30:51 - clean up keypairs 2026-04-01 04:30:51.982788 | orchestrator | 2026-04-01 04:30:51 - wait for servers to be gone 2026-04-01 04:30:52.023742 | orchestrator | 2026-04-01 04:30:52 - clean up ports 2026-04-01 04:30:52.110729 | orchestrator | 2026-04-01 04:30:52 - clean up volumes 2026-04-01 04:30:52.171048 | orchestrator | 2026-04-01 04:30:52 - disconnect routers 2026-04-01 04:30:52.194112 | orchestrator | 2026-04-01 04:30:52 - clean up subnets 2026-04-01 04:30:52.222857 | orchestrator | 2026-04-01 04:30:52 - clean up networks 2026-04-01 04:30:52.397665 | orchestrator | 2026-04-01 04:30:52 - clean up security groups 2026-04-01 04:30:52.432929 | orchestrator | 2026-04-01 04:30:52 - clean up floating ips 2026-04-01 04:30:52.460343 | 
orchestrator | 2026-04-01 04:30:52 - clean up routers 2026-04-01 04:30:52.714356 | orchestrator | ok: Runtime: 0:00:01.494676 2026-04-01 04:30:52.718357 | 2026-04-01 04:30:52.718556 | PLAY RECAP 2026-04-01 04:30:52.718726 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-01 04:30:52.718831 | 2026-04-01 04:30:52.863810 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-01 04:30:52.864983 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-01 04:30:53.688908 | 2026-04-01 04:30:53.689073 | PLAY [Base post-fetch] 2026-04-01 04:30:53.704729 | 2026-04-01 04:30:53.704865 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-01 04:30:53.771713 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:53.786565 | 2026-04-01 04:30:53.786766 | TASK [fetch-output : Set log path for single node] 2026-04-01 04:30:53.841179 | orchestrator | ok 2026-04-01 04:30:53.848028 | 2026-04-01 04:30:53.848147 | LOOP [fetch-output : Ensure local output dirs] 2026-04-01 04:30:54.369986 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/logs" 2026-04-01 04:30:54.652164 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/artifacts" 2026-04-01 04:30:54.928411 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c24d998d74c248cb905c5d59acbcdaec/work/docs" 2026-04-01 04:30:54.949216 | 2026-04-01 04:30:54.949365 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-01 04:30:55.909387 | orchestrator | changed: .d..t...... ./ 2026-04-01 04:30:55.909813 | orchestrator | changed: All items complete 2026-04-01 04:30:55.909880 | 2026-04-01 04:30:56.640298 | orchestrator | changed: .d..t...... ./ 2026-04-01 04:30:57.386502 | orchestrator | changed: .d..t...... 
./ 2026-04-01 04:30:57.413657 | 2026-04-01 04:30:57.413818 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-01 04:30:57.443673 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:57.446334 | orchestrator | skipping: Conditional result was False 2026-04-01 04:30:57.465559 | 2026-04-01 04:30:57.465687 | PLAY RECAP 2026-04-01 04:30:57.465773 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-01 04:30:57.465820 | 2026-04-01 04:30:57.629474 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-01 04:30:57.631084 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-01 04:30:58.389954 | 2026-04-01 04:30:58.390132 | PLAY [Base post] 2026-04-01 04:30:58.405383 | 2026-04-01 04:30:58.405672 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-01 04:30:59.398232 | orchestrator | changed 2026-04-01 04:30:59.408767 | 2026-04-01 04:30:59.408911 | PLAY RECAP 2026-04-01 04:30:59.408990 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-01 04:30:59.409064 | 2026-04-01 04:30:59.531983 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-01 04:30:59.533063 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-01 04:31:00.344537 | 2026-04-01 04:31:00.344720 | PLAY [Base post-logs] 2026-04-01 04:31:00.355728 | 2026-04-01 04:31:00.355878 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-01 04:31:00.828644 | localhost | changed 2026-04-01 04:31:00.838992 | 2026-04-01 04:31:00.839149 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-01 04:31:00.874703 | localhost | ok 2026-04-01 04:31:00.877976 | 2026-04-01 04:31:00.878092 | TASK [Set zuul-log-path fact] 2026-04-01 
04:31:00.906509 | localhost | ok 2026-04-01 04:31:00.924612 | 2026-04-01 04:31:00.924782 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-01 04:31:00.964544 | localhost | ok 2026-04-01 04:31:00.972632 | 2026-04-01 04:31:00.972799 | TASK [upload-logs : Create log directories] 2026-04-01 04:31:01.495921 | localhost | changed 2026-04-01 04:31:01.501971 | 2026-04-01 04:31:01.502175 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-01 04:31:02.056868 | localhost -> localhost | ok: Runtime: 0:00:00.008555 2026-04-01 04:31:02.065753 | 2026-04-01 04:31:02.065986 | TASK [upload-logs : Upload logs to log server] 2026-04-01 04:31:02.703356 | localhost | Output suppressed because no_log was given 2026-04-01 04:31:02.705227 | 2026-04-01 04:31:02.705329 | LOOP [upload-logs : Compress console log and json output] 2026-04-01 04:31:02.762351 | localhost | skipping: Conditional result was False 2026-04-01 04:31:02.767886 | localhost | skipping: Conditional result was False 2026-04-01 04:31:02.777067 | 2026-04-01 04:31:02.777312 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-01 04:31:02.828770 | localhost | skipping: Conditional result was False 2026-04-01 04:31:02.829353 | 2026-04-01 04:31:02.831277 | localhost | skipping: Conditional result was False 2026-04-01 04:31:02.837585 | 2026-04-01 04:31:02.837769 | LOOP [upload-logs : Upload console log and json output]
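The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above, ending in RUN END RESULT_TIMED_OUT, are the signature of a client-side polling loop that never saw its tasks leave STARTED before the job timeout. A minimal sketch of that pattern in Python (hypothetical code for illustration; the function name `wait_for_tasks` and the injected `get_state`, `sleep`, and `clock` callables are assumptions, not the actual osism client):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll until every task leaves STARTED, or raise on timeout.

    get_state: callable(task_id) -> state string, e.g. "STARTED" or "SUCCESS".
    sleep/clock are injectable so the loop can be tested without real delays.
    """
    deadline = clock() + timeout
    while True:
        # Query and report the current state of each watched task.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states  # every task reached a terminal state
        if clock() >= deadline:
            # Analogous to the job-level timeout that ended the run above.
            raise TimeoutError(f"tasks still STARTED after {timeout} s")
        print(f"Wait {interval:g} second(s) until the next check")
        sleep(interval)
```

Injecting `sleep` and `clock` keeps the loop deterministic under test; in production the defaults poll every `interval` seconds until the deadline.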